Test Report: QEMU_macOS 19337

a9f4e4a9a8ef6f7d1064a3bd8285d9113f3d3767:2024-07-29:35545

Tests failed (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.23
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.93
36 TestAddons/Setup 10.06
37 TestCertOptions 10.06
38 TestCertExpiration 195.23
39 TestDockerFlags 10.04
40 TestForceSystemdFlag 10.04
41 TestForceSystemdEnv 10.75
47 TestErrorSpam/setup 10.02
56 TestFunctional/serial/StartWithProxy 9.99
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.98
72 TestFunctional/serial/ExtraConfig 5.25
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.27
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 90.76
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.82
150 TestMultiControlPlane/serial/StartCluster 9.84
151 TestMultiControlPlane/serial/DeployApp 115.36
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.1
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 55.03
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.16
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
164 TestMultiControlPlane/serial/StopCluster 3.49
165 TestMultiControlPlane/serial/RestartCluster 5.24
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.07
171 TestImageBuild/serial/Setup 9.91
174 TestJSONOutput/start/Command 10.01
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.04
203 TestMinikubeProfile 10.26
206 TestMountStart/serial/StartWithMountFirst 9.98
209 TestMultiNode/serial/FreshStart2Nodes 9.97
210 TestMultiNode/serial/DeployApp2Nodes 97.74
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 52.8
218 TestMultiNode/serial/RestartKeepsNodes 7.21
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 4.15
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.41
226 TestPreload 10.07
228 TestScheduledStopUnix 9.92
229 TestSkaffold 12.32
232 TestRunningBinaryUpgrade 588.93
234 TestKubernetesUpgrade 18.27
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.4
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.33
250 TestStoppedBinaryUpgrade/Upgrade 574.69
252 TestPause/serial/Start 9.88
262 TestNoKubernetes/serial/StartWithK8s 9.87
263 TestNoKubernetes/serial/StartWithStopK8s 5.31
264 TestNoKubernetes/serial/Start 5.28
268 TestNoKubernetes/serial/StartNoArgs 5.28
270 TestNetworkPlugins/group/auto/Start 9.99
271 TestNetworkPlugins/group/kindnet/Start 10.19
272 TestNetworkPlugins/group/calico/Start 10.03
273 TestNetworkPlugins/group/custom-flannel/Start 9.79
274 TestNetworkPlugins/group/false/Start 9.79
275 TestNetworkPlugins/group/enable-default-cni/Start 9.96
276 TestNetworkPlugins/group/flannel/Start 9.81
277 TestNetworkPlugins/group/bridge/Start 9.8
278 TestNetworkPlugins/group/kubenet/Start 9.91
280 TestStartStop/group/old-k8s-version/serial/FirstStart 10.07
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 10.27
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.23
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.98
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.93
306 TestStartStop/group/embed-certs/serial/DeployApp 0.1
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
313 TestStartStop/group/embed-certs/serial/SecondStart 5.25
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.27
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/FirstStart 10
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (16.23s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-462000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-462000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (16.226768584s)

-- stdout --
	{"specversion":"1.0","id":"c62e80b9-2b0a-4b0d-9884-e9cfb54145d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-462000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c64636a-6751-408b-863a-4bd53e8e6a88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"b34f18b2-2dbe-4b5f-abe3-0f215d2f2c01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig"}}
	{"specversion":"1.0","id":"b99aac9a-d0b3-44b5-8c59-0ebe79b7333d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4a087b8c-0ae4-4744-a805-9f36a4d78ebd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"12fe1c92-45fe-4699-bde0-3adc7ae87ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube"}}
	{"specversion":"1.0","id":"65cec0c4-c238-41e8-994e-69086b28ec26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"994fa63e-9c38-4794-897d-cff0d1dc1042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"abddb8ea-7af3-4ceb-b543-2a9262f84863","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ecdc0a8a-6575-4939-b603-02bf5929b749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a4a7ab2-eb70-4b7e-8c93-03834262ee18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-462000\" primary control-plane node in \"download-only-462000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b3854ec-7127-4ea5-91f5-abf5fd95a6b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1971d93-9ac8-4f7b-aff5-f865ca6d85f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60] Decompressors:map[bz2:0x14000901920 gz:0x14000901928 tar:0x140009018d0 tar.bz2:0x140009018e0 tar.gz:0x140009018f0 tar.xz:0x14000901900 tar.zst:0x14000901910 tbz2:0x140009018e0 tgz:0x14
0009018f0 txz:0x14000901900 tzst:0x14000901910 xz:0x14000901930 zip:0x14000901940 zst:0x14000901938] Getters:map[file:0x14001514550 http:0x140005fa1e0 https:0x140005fa230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"994e8642-94db-4a7a-8827-16fb14b18c48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 03:19:19.727600    6845 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:19.727797    6845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:19.727801    6845 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:19.727804    6845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:19.727921    6845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	W0729 03:19:19.728006    6845 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19337-6349/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19337-6349/.minikube/config/config.json: no such file or directory
	I0729 03:19:19.729346    6845 out.go:298] Setting JSON to true
	I0729 03:19:19.747044    6845 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4728,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:19:19.747119    6845 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:19.752719    6845 out.go:97] [download-only-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:19:19.752870    6845 notify.go:220] Checking for updates...
	W0729 03:19:19.752919    6845 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 03:19:19.755701    6845 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:19.758728    6845 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:19:19.762972    6845 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:19:19.766547    6845 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:19.770718    6845 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	W0729 03:19:19.775268    6845 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:19.775456    6845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:19.778703    6845 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:19:19.778721    6845 start.go:297] selected driver: qemu2
	I0729 03:19:19.778733    6845 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:19:19.778794    6845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:19.781708    6845 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:19:19.787916    6845 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:19:19.788014    6845 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:19.788062    6845 cni.go:84] Creating CNI manager for ""
	I0729 03:19:19.788079    6845 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:19:19.788124    6845 start.go:340] cluster config:
	{Name:download-only-462000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:19.791687    6845 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:19:19.795700    6845 out.go:97] Downloading VM boot image ...
	I0729 03:19:19.795716    6845 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 03:19:27.337676    6845 out.go:97] Starting "download-only-462000" primary control-plane node in "download-only-462000" cluster
	I0729 03:19:27.337695    6845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:27.391638    6845 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:19:27.391644    6845 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:27.392210    6845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:27.396704    6845 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 03:19:27.396710    6845 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:27.474385    6845 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:19:34.789991    6845 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:34.790169    6845 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:35.484454    6845 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:19:35.484680    6845 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/download-only-462000/config.json ...
	I0729 03:19:35.484700    6845 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/download-only-462000/config.json: {Name:mkcb052033094f2f2cc451596777a23309f06e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:19:35.485805    6845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:35.486161    6845 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 03:19:35.869108    6845 out.go:169] 
	W0729 03:19:35.875020    6845 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60] Decompressors:map[bz2:0x14000901920 gz:0x14000901928 tar:0x140009018d0 tar.bz2:0x140009018e0 tar.gz:0x140009018f0 tar.xz:0x14000901900 tar.zst:0x14000901910 tbz2:0x140009018e0 tgz:0x140009018f0 txz:0x14000901900 tzst:0x14000901910 xz:0x14000901930 zip:0x14000901940 zst:0x14000901938] Getters:map[file:0x14001514550 http:0x140005fa1e0 https:0x140005fa230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 03:19:35.875044    6845 out_reason.go:110] 
	W0729 03:19:35.882081    6845 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:19:35.886843    6845 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-462000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (16.23s)
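Note: the exit status 40 above reduces to a single root cause visible in the log: the checksum file at https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 answers 404, so minikube cannot verify or cache a kubectl binary for v1.20.0 on darwin/arm64. Below is a minimal Go sketch that reproduces the probe with a plain HTTP HEAD; it is a hypothetical diagnostic, not part of the test suite, and the expected 404 is an assumption about the state of dl.k8s.io at the time of this run.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the failure log above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	// During this run the server returned 404 Not Found, which minikube
	// surfaces as "bad response code: 404" and maps to exit status 40.
	fmt.Println(url, "->", resp.Status)
}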

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-405000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-405000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.78689025s)

-- stdout --
	* [offline-docker-405000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-405000" primary control-plane node in "offline-docker-405000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-405000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:31:29.493560    8529 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:31:29.493726    8529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:29.493732    8529 out.go:304] Setting ErrFile to fd 2...
	I0729 03:31:29.493734    8529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:29.493900    8529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:31:29.495128    8529 out.go:298] Setting JSON to false
	I0729 03:31:29.513087    8529 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5458,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:31:29.513212    8529 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:31:29.518533    8529 out.go:177] * [offline-docker-405000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:31:29.526406    8529 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:31:29.526421    8529 notify.go:220] Checking for updates...
	I0729 03:31:29.535251    8529 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:31:29.538328    8529 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:31:29.541225    8529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:31:29.544253    8529 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:31:29.547295    8529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:31:29.550615    8529 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:31:29.550685    8529 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:31:29.555307    8529 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:31:29.561279    8529 start.go:297] selected driver: qemu2
	I0729 03:31:29.561286    8529 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:31:29.561294    8529 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:31:29.563430    8529 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:31:29.566270    8529 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:31:29.570435    8529 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:31:29.570468    8529 cni.go:84] Creating CNI manager for ""
	I0729 03:31:29.570476    8529 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:31:29.570480    8529 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:31:29.570519    8529 start.go:340] cluster config:
	{Name:offline-docker-405000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-405000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:31:29.574650    8529 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:31:29.582266    8529 out.go:177] * Starting "offline-docker-405000" primary control-plane node in "offline-docker-405000" cluster
	I0729 03:31:29.586285    8529 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:31:29.586372    8529 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:31:29.586413    8529 cache.go:56] Caching tarball of preloaded images
	I0729 03:31:29.586541    8529 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:31:29.586549    8529 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:31:29.586617    8529 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/offline-docker-405000/config.json ...
	I0729 03:31:29.586628    8529 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/offline-docker-405000/config.json: {Name:mk3e8e4e7a596c85490d176a853af0c0b5f1120f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:31:29.586939    8529 start.go:360] acquireMachinesLock for offline-docker-405000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:29.586978    8529 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "offline-docker-405000"
	I0729 03:31:29.586991    8529 start.go:93] Provisioning new machine with config: &{Name:offline-docker-405000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-405000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:29.587034    8529 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:29.595312    8529 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:29.611398    8529 start.go:159] libmachine.API.Create for "offline-docker-405000" (driver="qemu2")
	I0729 03:31:29.611427    8529 client.go:168] LocalClient.Create starting
	I0729 03:31:29.611507    8529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:29.611540    8529 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:29.611550    8529 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:29.611591    8529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:29.611614    8529 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:29.611622    8529 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:29.612054    8529 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:29.764908    8529 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:29.832739    8529 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:29.832748    8529 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:29.832939    8529 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2
	I0729 03:31:29.851554    8529 main.go:141] libmachine: STDOUT: 
	I0729 03:31:29.851575    8529 main.go:141] libmachine: STDERR: 
	I0729 03:31:29.851652    8529 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2 +20000M
	I0729 03:31:29.860345    8529 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:29.860367    8529 main.go:141] libmachine: STDERR: 
	I0729 03:31:29.860382    8529 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2
	I0729 03:31:29.860387    8529 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:29.860398    8529 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:29.860429    8529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:24:26:0a:93:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2
	I0729 03:31:29.862146    8529 main.go:141] libmachine: STDOUT: 
	I0729 03:31:29.862162    8529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:29.862187    8529 client.go:171] duration metric: took 250.761166ms to LocalClient.Create
	I0729 03:31:31.864245    8529 start.go:128] duration metric: took 2.277245875s to createHost
	I0729 03:31:31.864267    8529 start.go:83] releasing machines lock for "offline-docker-405000", held for 2.277327834s
	W0729 03:31:31.864278    8529 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:31.869578    8529 out.go:177] * Deleting "offline-docker-405000" in qemu2 ...
	W0729 03:31:31.886613    8529 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:31.886625    8529 start.go:729] Will try again in 5 seconds ...
	I0729 03:31:36.888769    8529 start.go:360] acquireMachinesLock for offline-docker-405000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:36.889277    8529 start.go:364] duration metric: took 384.667µs to acquireMachinesLock for "offline-docker-405000"
	I0729 03:31:36.889400    8529 start.go:93] Provisioning new machine with config: &{Name:offline-docker-405000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-405000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:36.889605    8529 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:36.898388    8529 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:36.949469    8529 start.go:159] libmachine.API.Create for "offline-docker-405000" (driver="qemu2")
	I0729 03:31:36.949522    8529 client.go:168] LocalClient.Create starting
	I0729 03:31:36.949653    8529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:36.949712    8529 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:36.949728    8529 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:36.949825    8529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:36.949876    8529 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:36.949886    8529 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:36.950432    8529 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:37.111408    8529 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:37.181396    8529 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:37.181402    8529 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:37.181600    8529 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2
	I0729 03:31:37.190574    8529 main.go:141] libmachine: STDOUT: 
	I0729 03:31:37.190591    8529 main.go:141] libmachine: STDERR: 
	I0729 03:31:37.190636    8529 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2 +20000M
	I0729 03:31:37.198324    8529 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:37.198336    8529 main.go:141] libmachine: STDERR: 
	I0729 03:31:37.198354    8529 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2
	I0729 03:31:37.198359    8529 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:37.198369    8529 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:37.198402    8529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:a1:d7:ae:18:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/offline-docker-405000/disk.qcow2
	I0729 03:31:37.199900    8529 main.go:141] libmachine: STDOUT: 
	I0729 03:31:37.199915    8529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:37.199929    8529 client.go:171] duration metric: took 250.407ms to LocalClient.Create
	I0729 03:31:39.202146    8529 start.go:128] duration metric: took 2.312554042s to createHost
	I0729 03:31:39.202214    8529 start.go:83] releasing machines lock for "offline-docker-405000", held for 2.312959542s
	W0729 03:31:39.202604    8529 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-405000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-405000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:39.217070    8529 out.go:177] 
	W0729 03:31:39.221308    8529 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:31:39.221373    8529 out.go:239] * 
	* 
	W0729 03:31:39.224360    8529 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:31:39.239224    8529 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-405000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 03:31:39.251294 -0700 PDT m=+739.648182585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-405000 -n offline-docker-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-405000 -n offline-docker-405000: exit status 7 (61.607167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-405000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-405000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-405000
--- FAIL: TestOffline (9.93s)
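Note: TestOffline (and TestAddons/Setup below) never gets a VM. QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host creation fails before provisioning even starts. Below is a minimal Go sketch for checking daemon reachability ahead of a run; it assumes only the socket path shown in the log and is a hypothetical diagnostic, not minikube code.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the QEMU command line in the log above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches this report's failure mode: connect: connection refused.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}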

TestAddons/Setup (10.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-797000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-797000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.057414375s)

-- stdout --
	* [addons-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-797000" primary control-plane node in "addons-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:20:04.665137    6957 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:20:04.665257    6957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:20:04.665259    6957 out.go:304] Setting ErrFile to fd 2...
	I0729 03:20:04.665262    6957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:20:04.665425    6957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:20:04.666500    6957 out.go:298] Setting JSON to false
	I0729 03:20:04.682464    6957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4773,"bootTime":1722243631,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:20:04.682525    6957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:20:04.690843    6957 out.go:177] * [addons-797000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:20:04.697836    6957 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:20:04.697882    6957 notify.go:220] Checking for updates...
	I0729 03:20:04.703795    6957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:20:04.706762    6957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:20:04.709848    6957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:20:04.712751    6957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:20:04.715838    6957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:20:04.718857    6957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:20:04.722806    6957 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:20:04.729726    6957 start.go:297] selected driver: qemu2
	I0729 03:20:04.729735    6957 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:20:04.729744    6957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:20:04.732108    6957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:20:04.735819    6957 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:20:04.739857    6957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:20:04.739879    6957 cni.go:84] Creating CNI manager for ""
	I0729 03:20:04.739887    6957 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:20:04.739894    6957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:20:04.739931    6957 start.go:340] cluster config:
	{Name:addons-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:20:04.743654    6957 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:20:04.750801    6957 out.go:177] * Starting "addons-797000" primary control-plane node in "addons-797000" cluster
	I0729 03:20:04.753760    6957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:20:04.753777    6957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:20:04.753791    6957 cache.go:56] Caching tarball of preloaded images
	I0729 03:20:04.753870    6957 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:20:04.753876    6957 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:20:04.754123    6957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/addons-797000/config.json ...
	I0729 03:20:04.754135    6957 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/addons-797000/config.json: {Name:mk9c12811262389c7f913d816c0596906939904d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:20:04.754573    6957 start.go:360] acquireMachinesLock for addons-797000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:20:04.754640    6957 start.go:364] duration metric: took 60.584µs to acquireMachinesLock for "addons-797000"
	I0729 03:20:04.754652    6957 start.go:93] Provisioning new machine with config: &{Name:addons-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:addons-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:20:04.754681    6957 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:20:04.759809    6957 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 03:20:04.780099    6957 start.go:159] libmachine.API.Create for "addons-797000" (driver="qemu2")
	I0729 03:20:04.780129    6957 client.go:168] LocalClient.Create starting
	I0729 03:20:04.780237    6957 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:20:04.924896    6957 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:20:05.034365    6957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:20:05.260924    6957 main.go:141] libmachine: Creating SSH key...
	I0729 03:20:05.298608    6957 main.go:141] libmachine: Creating Disk image...
	I0729 03:20:05.298614    6957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:20:05.298807    6957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2
	I0729 03:20:05.308041    6957 main.go:141] libmachine: STDOUT: 
	I0729 03:20:05.308061    6957 main.go:141] libmachine: STDERR: 
	I0729 03:20:05.308106    6957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2 +20000M
	I0729 03:20:05.315975    6957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:20:05.315989    6957 main.go:141] libmachine: STDERR: 
	I0729 03:20:05.316009    6957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2
	I0729 03:20:05.316014    6957 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:20:05.316041    6957 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:20:05.316065    6957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:33:9c:01:b2:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2
	I0729 03:20:05.317617    6957 main.go:141] libmachine: STDOUT: 
	I0729 03:20:05.317635    6957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:20:05.317652    6957 client.go:171] duration metric: took 537.646083ms to LocalClient.Create
	I0729 03:20:07.319404    6957 start.go:128] duration metric: took 2.565289708s to createHost
	I0729 03:20:07.319465    6957 start.go:83] releasing machines lock for "addons-797000", held for 2.565398334s
	W0729 03:20:07.319562    6957 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:20:07.326769    6957 out.go:177] * Deleting "addons-797000" in qemu2 ...
	W0729 03:20:07.356280    6957 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:20:07.356304    6957 start.go:729] Will try again in 5 seconds ...
	I0729 03:20:12.357546    6957 start.go:360] acquireMachinesLock for addons-797000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:20:12.357997    6957 start.go:364] duration metric: took 332.75µs to acquireMachinesLock for "addons-797000"
	I0729 03:20:12.358111    6957 start.go:93] Provisioning new machine with config: &{Name:addons-797000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:addons-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:20:12.358454    6957 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:20:12.373183    6957 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 03:20:12.422675    6957 start.go:159] libmachine.API.Create for "addons-797000" (driver="qemu2")
	I0729 03:20:12.422724    6957 client.go:168] LocalClient.Create starting
	I0729 03:20:12.422847    6957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:20:12.422912    6957 main.go:141] libmachine: Decoding PEM data...
	I0729 03:20:12.422926    6957 main.go:141] libmachine: Parsing certificate...
	I0729 03:20:12.423021    6957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:20:12.423077    6957 main.go:141] libmachine: Decoding PEM data...
	I0729 03:20:12.423087    6957 main.go:141] libmachine: Parsing certificate...
	I0729 03:20:12.424115    6957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:20:12.593886    6957 main.go:141] libmachine: Creating SSH key...
	I0729 03:20:12.627506    6957 main.go:141] libmachine: Creating Disk image...
	I0729 03:20:12.627515    6957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:20:12.627728    6957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2
	I0729 03:20:12.637237    6957 main.go:141] libmachine: STDOUT: 
	I0729 03:20:12.637258    6957 main.go:141] libmachine: STDERR: 
	I0729 03:20:12.637325    6957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2 +20000M
	I0729 03:20:12.645135    6957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:20:12.645160    6957 main.go:141] libmachine: STDERR: 
	I0729 03:20:12.645172    6957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2
	I0729 03:20:12.645177    6957 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:20:12.645187    6957 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:20:12.645209    6957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:bb:cf:cb:13:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/addons-797000/disk.qcow2
	I0729 03:20:12.646808    6957 main.go:141] libmachine: STDOUT: 
	I0729 03:20:12.646823    6957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:20:12.646834    6957 client.go:171] duration metric: took 224.139458ms to LocalClient.Create
	I0729 03:20:14.648714    6957 start.go:128] duration metric: took 2.290552291s to createHost
	I0729 03:20:14.648766    6957 start.go:83] releasing machines lock for "addons-797000", held for 2.291082958s
	W0729 03:20:14.649139    6957 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:20:14.660879    6957 out.go:177] 
	W0729 03:20:14.664908    6957 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:20:14.664965    6957 out.go:239] * 
	* 
	W0729 03:20:14.667807    6957 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:20:14.678858    6957 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-797000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.06s)
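Every start in this run dies the same way: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation aborts with GUEST_PROVISION. A minimal pre-flight probe, assuming only that the socket_vmnet daemon should be listening on that unix socket (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches every failure in this report and
		// points at the daemon not listening on the agent, not at minikube.
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on the build agent before the suite, a probe like this would separate an environment problem (the daemon is down, as the repeated "Connection refused" suggests) from a regression in the qemu2 driver itself.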

TestCertOptions (10.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-126000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-126000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.801586083s)

-- stdout --
	* [cert-options-126000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-126000" primary control-plane node in "cert-options-126000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-126000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-126000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-126000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-126000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.742417ms)

-- stdout --
	* The control-plane node cert-options-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-126000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-126000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-126000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-126000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-126000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.622542ms)

-- stdout --
	* The control-plane node cert-options-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-126000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-126000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-126000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-126000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 03:32:10.137305 -0700 PDT m=+770.534792168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-126000 -n cert-options-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-126000 -n cert-options-126000: exit status 7 (30.322375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-126000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-126000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-126000
--- FAIL: TestCertOptions (10.06s)
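TestCertOptions never gets a running host, so the SAN assertions at cert_options_test.go:69 fail by default. The check itself is plain x509 inspection; a self-contained sketch of what the test would verify against apiserver.crt (the file path here is a hypothetical local copy, and the expected values come from the --apiserver-ips/--apiserver-names flags above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical copy fetched from the node
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs,
	// and localhost / www.google.com among the DNS SANs.
	for _, ip := range cert.IPAddresses {
		fmt.Println("IP SAN:", ip)
	}
	for _, name := range cert.DNSNames {
		fmt.Println("DNS SAN:", name)
	}
}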

TestCertExpiration (195.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-247000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-247000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.849604541s)

-- stdout --
	* [cert-expiration-247000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-247000" primary control-plane node in "cert-expiration-247000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-247000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-247000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-247000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-247000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-247000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.236800166s)

-- stdout --
	* [cert-expiration-247000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-247000" primary control-plane node in "cert-expiration-247000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-247000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-247000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-247000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-247000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-247000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-247000" primary control-plane node in "cert-expiration-247000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-247000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-247000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-247000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 03:35:10.197545 -0700 PDT m=+950.598526085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-247000 -n cert-expiration-247000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-247000 -n cert-expiration-247000: exit status 7 (57.927333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-247000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-247000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-247000
--- FAIL: TestCertExpiration (195.23s)
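The 195s runtime is dominated by the fixed wait between the two starts: the test issues 3m certificates, apparently sleeps until they lapse, then restarts with --cert-expiration=8760h and expects a warning about the expired certs. The underlying condition is just NotAfter against the clock; a sketch (file path hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical cert to inspect
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}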

TestDockerFlags (10.04s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-761000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-761000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.81749275s)

-- stdout --
	* [docker-flags-761000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-761000" primary control-plane node in "docker-flags-761000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-761000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:31:50.170370    8720 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:31:50.170503    8720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:50.170509    8720 out.go:304] Setting ErrFile to fd 2...
	I0729 03:31:50.170512    8720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:50.170643    8720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:31:50.171739    8720 out.go:298] Setting JSON to false
	I0729 03:31:50.188032    8720 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5479,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:31:50.188101    8720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:31:50.194904    8720 out.go:177] * [docker-flags-761000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:31:50.200737    8720 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:31:50.200821    8720 notify.go:220] Checking for updates...
	I0729 03:31:50.209665    8720 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:31:50.212734    8720 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:31:50.215725    8720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:31:50.218635    8720 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:31:50.221716    8720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:31:50.225007    8720 config.go:182] Loaded profile config "force-systemd-flag-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:31:50.225073    8720 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:31:50.225119    8720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:31:50.228705    8720 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:31:50.235739    8720 start.go:297] selected driver: qemu2
	I0729 03:31:50.235746    8720 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:31:50.235753    8720 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:31:50.238220    8720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:31:50.239633    8720 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:31:50.243747    8720 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 03:31:50.243775    8720 cni.go:84] Creating CNI manager for ""
	I0729 03:31:50.243783    8720 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:31:50.243788    8720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:31:50.243820    8720 start.go:340] cluster config:
	{Name:docker-flags-761000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:31:50.247795    8720 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:31:50.256705    8720 out.go:177] * Starting "docker-flags-761000" primary control-plane node in "docker-flags-761000" cluster
	I0729 03:31:50.260698    8720 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:31:50.260711    8720 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:31:50.260719    8720 cache.go:56] Caching tarball of preloaded images
	I0729 03:31:50.260773    8720 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:31:50.260779    8720 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:31:50.260832    8720 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/docker-flags-761000/config.json ...
	I0729 03:31:50.260844    8720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/docker-flags-761000/config.json: {Name:mkba4d373c9c0f6b4f5ddd141b490c8d162e8571 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:31:50.261070    8720 start.go:360] acquireMachinesLock for docker-flags-761000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:50.261108    8720 start.go:364] duration metric: took 30µs to acquireMachinesLock for "docker-flags-761000"
	I0729 03:31:50.261120    8720 start.go:93] Provisioning new machine with config: &{Name:docker-flags-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:50.261149    8720 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:50.268721    8720 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:50.286996    8720 start.go:159] libmachine.API.Create for "docker-flags-761000" (driver="qemu2")
	I0729 03:31:50.287022    8720 client.go:168] LocalClient.Create starting
	I0729 03:31:50.287085    8720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:50.287117    8720 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:50.287130    8720 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:50.287167    8720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:50.287190    8720 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:50.287198    8720 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:50.287545    8720 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:50.434580    8720 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:50.534673    8720 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:50.534678    8720 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:50.534912    8720 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2
	I0729 03:31:50.544400    8720 main.go:141] libmachine: STDOUT: 
	I0729 03:31:50.544417    8720 main.go:141] libmachine: STDERR: 
	I0729 03:31:50.544473    8720 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2 +20000M
	I0729 03:31:50.552212    8720 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:50.552236    8720 main.go:141] libmachine: STDERR: 
	I0729 03:31:50.552251    8720 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2
	I0729 03:31:50.552255    8720 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:50.552263    8720 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:50.552288    8720 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:f9:cf:69:6f:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2
	I0729 03:31:50.553957    8720 main.go:141] libmachine: STDOUT: 
	I0729 03:31:50.553971    8720 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:50.553990    8720 client.go:171] duration metric: took 266.968459ms to LocalClient.Create
	I0729 03:31:52.556099    8720 start.go:128] duration metric: took 2.294979833s to createHost
	I0729 03:31:52.556167    8720 start.go:83] releasing machines lock for "docker-flags-761000", held for 2.295091375s
	W0729 03:31:52.556256    8720 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:52.578221    8720 out.go:177] * Deleting "docker-flags-761000" in qemu2 ...
	W0729 03:31:52.599545    8720 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:52.599562    8720 start.go:729] Will try again in 5 seconds ...
	I0729 03:31:57.601748    8720 start.go:360] acquireMachinesLock for docker-flags-761000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:57.602223    8720 start.go:364] duration metric: took 372.209µs to acquireMachinesLock for "docker-flags-761000"
	I0729 03:31:57.602339    8720 start.go:93] Provisioning new machine with config: &{Name:docker-flags-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:57.602690    8720 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:57.612086    8720 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:57.663797    8720 start.go:159] libmachine.API.Create for "docker-flags-761000" (driver="qemu2")
	I0729 03:31:57.663845    8720 client.go:168] LocalClient.Create starting
	I0729 03:31:57.663963    8720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:57.664027    8720 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:57.664043    8720 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:57.664113    8720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:57.664157    8720 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:57.664169    8720 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:57.664750    8720 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:57.832608    8720 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:57.894785    8720 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:57.894790    8720 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:57.894989    8720 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2
	I0729 03:31:57.904425    8720 main.go:141] libmachine: STDOUT: 
	I0729 03:31:57.904445    8720 main.go:141] libmachine: STDERR: 
	I0729 03:31:57.904505    8720 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2 +20000M
	I0729 03:31:57.912249    8720 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:57.912264    8720 main.go:141] libmachine: STDERR: 
	I0729 03:31:57.912274    8720 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2
	I0729 03:31:57.912279    8720 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:57.912290    8720 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:57.912324    8720 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:be:4f:51:e9:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/docker-flags-761000/disk.qcow2
	I0729 03:31:57.913937    8720 main.go:141] libmachine: STDOUT: 
	I0729 03:31:57.913952    8720 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:57.913965    8720 client.go:171] duration metric: took 250.119333ms to LocalClient.Create
	I0729 03:31:59.916128    8720 start.go:128] duration metric: took 2.313398292s to createHost
	I0729 03:31:59.916178    8720 start.go:83] releasing machines lock for "docker-flags-761000", held for 2.313976375s
	W0729 03:31:59.916500    8720 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:59.931139    8720 out.go:177] 
	W0729 03:31:59.935211    8720 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:31:59.935234    8720 out.go:239] * 
	* 
	W0729 03:31:59.938035    8720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:31:59.945096    8720 out.go:177] 

** /stderr **
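Both create attempts in the log above fail at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU is never launched and the machine is left Stopped. A minimal Go probe of that precondition (hypothetical; not part of minikube or the test suite; the path comes from SocketVMnetPath in the cluster config above):

// probe_socket_vmnet.go: hypothetical pre-flight check, not minikube code.
// Dials the socket_vmnet unix socket that the qemu2 driver needs; if the
// daemon is not running, the dial fails with "connection refused", which is
// the exact error repeated throughout the logs above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up; the qemu2 driver should be able to start VMs")
}

If the dial fails with "connection refused", as here, the socket_vmnet daemon on the CI host is down and every qemu2-driver test fails the same way.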
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-761000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-761000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-761000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.95575ms)

-- stdout --
	* The control-plane node docker-flags-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-761000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-761000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-761000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-761000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-761000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-761000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-761000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-761000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (41.620583ms)

-- stdout --
	* The control-plane node docker-flags-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-761000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-761000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-761000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-761000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-761000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 03:32:00.080137 -0700 PDT m=+760.477429335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-761000 -n docker-flags-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-761000 -n docker-flags-761000: exit status 7 (28.69675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-761000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-761000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-761000
--- FAIL: TestDockerFlags (10.04s)
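The two failed assertions above (docker_test.go:63 and docker_test.go:73) reduce to running systemctl show docker over minikube ssh and scanning the output for the injected --docker-env and --docker-opt values. A rough sketch of that check, reconstructed from the failure messages rather than from minikube's actual test source (helper names are illustrative):

// Sketch of the kind of check TestDockerFlags performs, reconstructed from
// the failure messages above; names and error handling are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerEnvContains runs `systemctl show docker --property=Environment` inside
// the VM via `minikube ssh` and reports whether the expected KEY=VALUE pair
// was passed through to the Docker daemon.
func dockerEnvContains(profile, want string) (bool, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("ssh failed: %w: %s", err, out)
	}
	return strings.Contains(string(out), want), nil
}

func main() {
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		ok, err := dockerEnvContains("docker-flags-761000", kv)
		fmt.Printf("%s present=%v err=%v\n", kv, ok, err)
	}
}

Here the check never gets that far: the ssh step itself exits 83 because the host is Stopped.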

TestForceSystemdFlag (10.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-201000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-201000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.841303208s)

-- stdout --
	* [force-systemd-flag-201000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-201000" primary control-plane node in "force-systemd-flag-201000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-201000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:31:45.095916    8699 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:31:45.096050    8699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:45.096054    8699 out.go:304] Setting ErrFile to fd 2...
	I0729 03:31:45.096057    8699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:45.096178    8699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:31:45.097266    8699 out.go:298] Setting JSON to false
	I0729 03:31:45.113422    8699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5474,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:31:45.113498    8699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:31:45.118336    8699 out.go:177] * [force-systemd-flag-201000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:31:45.126143    8699 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:31:45.126202    8699 notify.go:220] Checking for updates...
	I0729 03:31:45.136227    8699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:31:45.140115    8699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:31:45.143221    8699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:31:45.147198    8699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:31:45.150227    8699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:31:45.153483    8699 config.go:182] Loaded profile config "force-systemd-env-814000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:31:45.153573    8699 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:31:45.153632    8699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:31:45.158182    8699 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:31:45.165186    8699 start.go:297] selected driver: qemu2
	I0729 03:31:45.165195    8699 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:31:45.165202    8699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:31:45.167590    8699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:31:45.170205    8699 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:31:45.173238    8699 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:31:45.173249    8699 cni.go:84] Creating CNI manager for ""
	I0729 03:31:45.173263    8699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:31:45.173267    8699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:31:45.173288    8699 start.go:340] cluster config:
	{Name:force-systemd-flag-201000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:31:45.177040    8699 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:31:45.184036    8699 out.go:177] * Starting "force-systemd-flag-201000" primary control-plane node in "force-systemd-flag-201000" cluster
	I0729 03:31:45.188253    8699 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:31:45.188270    8699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:31:45.188281    8699 cache.go:56] Caching tarball of preloaded images
	I0729 03:31:45.188365    8699 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:31:45.188371    8699 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:31:45.188434    8699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/force-systemd-flag-201000/config.json ...
	I0729 03:31:45.188445    8699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/force-systemd-flag-201000/config.json: {Name:mk81537763fbc65007beb50db9af1a93f53333a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:31:45.188838    8699 start.go:360] acquireMachinesLock for force-systemd-flag-201000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:45.188880    8699 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "force-systemd-flag-201000"
	I0729 03:31:45.188893    8699 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:45.188924    8699 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:45.194152    8699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:45.212072    8699 start.go:159] libmachine.API.Create for "force-systemd-flag-201000" (driver="qemu2")
	I0729 03:31:45.212097    8699 client.go:168] LocalClient.Create starting
	I0729 03:31:45.212154    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:45.212188    8699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:45.212197    8699 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:45.212244    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:45.212272    8699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:45.212282    8699 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:45.212674    8699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:45.364068    8699 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:45.417787    8699 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:45.417796    8699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:45.417997    8699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2
	I0729 03:31:45.427110    8699 main.go:141] libmachine: STDOUT: 
	I0729 03:31:45.427128    8699 main.go:141] libmachine: STDERR: 
	I0729 03:31:45.427177    8699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2 +20000M
	I0729 03:31:45.435017    8699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:45.435029    8699 main.go:141] libmachine: STDERR: 
	I0729 03:31:45.435058    8699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2
	I0729 03:31:45.435066    8699 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:45.435080    8699 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:45.435109    8699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ab:54:5b:48:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2
	I0729 03:31:45.436681    8699 main.go:141] libmachine: STDOUT: 
	I0729 03:31:45.436695    8699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:45.436713    8699 client.go:171] duration metric: took 224.616791ms to LocalClient.Create
	I0729 03:31:47.438846    8699 start.go:128] duration metric: took 2.249945208s to createHost
	I0729 03:31:47.438889    8699 start.go:83] releasing machines lock for "force-systemd-flag-201000", held for 2.250039583s
	W0729 03:31:47.438949    8699 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:47.452998    8699 out.go:177] * Deleting "force-systemd-flag-201000" in qemu2 ...
	W0729 03:31:47.475006    8699 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:47.475027    8699 start.go:729] Will try again in 5 seconds ...
	I0729 03:31:52.477185    8699 start.go:360] acquireMachinesLock for force-systemd-flag-201000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:52.556264    8699 start.go:364] duration metric: took 78.876625ms to acquireMachinesLock for "force-systemd-flag-201000"
	I0729 03:31:52.556448    8699 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-201000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:52.556746    8699 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:52.566220    8699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:52.617413    8699 start.go:159] libmachine.API.Create for "force-systemd-flag-201000" (driver="qemu2")
	I0729 03:31:52.617464    8699 client.go:168] LocalClient.Create starting
	I0729 03:31:52.617599    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:52.617660    8699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:52.617677    8699 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:52.617745    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:52.617789    8699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:52.617801    8699 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:52.618378    8699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:52.776350    8699 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:52.834077    8699 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:52.834086    8699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:52.834322    8699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2
	I0729 03:31:52.843544    8699 main.go:141] libmachine: STDOUT: 
	I0729 03:31:52.843564    8699 main.go:141] libmachine: STDERR: 
	I0729 03:31:52.843613    8699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2 +20000M
	I0729 03:31:52.851534    8699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:52.851548    8699 main.go:141] libmachine: STDERR: 
	I0729 03:31:52.851571    8699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2
	I0729 03:31:52.851576    8699 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:52.851587    8699 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:52.851616    8699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:35:b2:d8:b7:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-flag-201000/disk.qcow2
	I0729 03:31:52.853191    8699 main.go:141] libmachine: STDOUT: 
	I0729 03:31:52.853209    8699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:52.853222    8699 client.go:171] duration metric: took 235.758208ms to LocalClient.Create
	I0729 03:31:54.855364    8699 start.go:128] duration metric: took 2.298637458s to createHost
	I0729 03:31:54.855522    8699 start.go:83] releasing machines lock for "force-systemd-flag-201000", held for 2.299195459s
	W0729 03:31:54.855873    8699 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-201000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-201000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:54.871451    8699 out.go:177] 
	W0729 03:31:54.881813    8699 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:31:54.881849    8699 out.go:239] * 
	* 
	W0729 03:31:54.884269    8699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:31:54.897445    8699 out.go:177] 

** /stderr **
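The stderr log shows minikube's retry-once shape: createHost fails (start.go:714), the half-created machine is deleted, a second attempt runs after 5 seconds (start.go:729), and only then does the run exit with GUEST_PROVISION. A compact sketch of that control flow (function names illustrative, not minikube's):

// Sketch of the retry-once pattern visible in the logs: one failed
// createHost, a delete, a 5-second wait, a second attempt, then a terminal
// error. createHost here is a stand-in that always fails the way the logs do.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

func createHost() error { return errConnRefused } // stand-in for real VM creation

func startWithRetry() error {
	if err := createHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		// the real code deletes the half-created machine here
		time.Sleep(5 * time.Second)
		if err := createHost(); err != nil {
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}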
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-201000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-201000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-201000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (90.060541ms)

-- stdout --
	* The control-plane node force-systemd-flag-201000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-201000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-201000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
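The assertion at docker_test.go:110 is a single check: docker info --format {{.CgroupDriver}} inside the VM should report the systemd driver once --force-systemd is in effect. A hedged sketch of the equivalent check (the expected value "systemd" is inferred from the test's intent, not quoted from its source):

// Sketch of the cgroup-driver assertion implied by docker_test.go:110-112.
// The expected value is an assumption based on what --force-systemd does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-201000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed (as in the log above, exit status 83): %v\n", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", got)
	}
}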
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 03:31:55.003773 -0700 PDT m=+755.400967001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-201000 -n force-systemd-flag-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-201000 -n force-systemd-flag-201000: exit status 7 (35.304083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-201000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-201000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-201000
--- FAIL: TestForceSystemdFlag (10.04s)
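One step that does succeed in every attempt is disk-image creation: qemu-img convert from the raw scratch file to qcow2, then qemu-img resize ... +20000M, exactly as logged above. As a two-subprocess sketch in Go (paths illustrative; the real driver derives them from the machine directory):

// Sketch of the disk-image step from the logs: convert a raw scratch file to
// qcow2, then grow the qcow2 by 20000M. Requires qemu-img on PATH.
package main

import (
	"fmt"
	"os/exec"
)

func createDiskImage(machineDir string) error {
	raw := machineDir + "/disk.qcow2.raw"
	qcow2 := machineDir + "/disk.qcow2"
	steps := [][]string{
		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"qemu-img", "resize", qcow2, "+20000M"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %w: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(createDiskImage("/tmp/example-machine"))
}

The failure only comes afterwards, when the freshly built disk is handed to qemu-system-aarch64 through socket_vmnet_client.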

TestForceSystemdEnv (10.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-814000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-814000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.555234209s)

-- stdout --
	* [force-systemd-env-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-814000" primary control-plane node in "force-systemd-env-814000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-814000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:31:39.422938    8667 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:31:39.423106    8667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:39.423112    8667 out.go:304] Setting ErrFile to fd 2...
	I0729 03:31:39.423114    8667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:31:39.423259    8667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:31:39.424337    8667 out.go:298] Setting JSON to false
	I0729 03:31:39.440804    8667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5468,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:31:39.440867    8667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:31:39.446979    8667 out.go:177] * [force-systemd-env-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:31:39.455019    8667 notify.go:220] Checking for updates...
	I0729 03:31:39.459977    8667 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:31:39.467968    8667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:31:39.475792    8667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:31:39.485018    8667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:31:39.492933    8667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:31:39.499923    8667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 03:31:39.504289    8667 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:31:39.504348    8667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:31:39.508935    8667 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:31:39.515951    8667 start.go:297] selected driver: qemu2
	I0729 03:31:39.515956    8667 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:31:39.515961    8667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:31:39.518208    8667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:31:39.521920    8667 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:31:39.526010    8667 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:31:39.526043    8667 cni.go:84] Creating CNI manager for ""
	I0729 03:31:39.526051    8667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:31:39.526056    8667 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:31:39.526082    8667 start.go:340] cluster config:
	{Name:force-systemd-env-814000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:31:39.529679    8667 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:31:39.536909    8667 out.go:177] * Starting "force-systemd-env-814000" primary control-plane node in "force-systemd-env-814000" cluster
	I0729 03:31:39.541000    8667 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:31:39.541028    8667 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:31:39.541039    8667 cache.go:56] Caching tarball of preloaded images
	I0729 03:31:39.541100    8667 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:31:39.541106    8667 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:31:39.541162    8667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/force-systemd-env-814000/config.json ...
	I0729 03:31:39.541173    8667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/force-systemd-env-814000/config.json: {Name:mkdaffcebfe36abeb03c91846d5d984177286090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:31:39.541378    8667 start.go:360] acquireMachinesLock for force-systemd-env-814000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:39.541411    8667 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "force-systemd-env-814000"
	I0729 03:31:39.541423    8667 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:39.541460    8667 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:39.549992    8667 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:39.566123    8667 start.go:159] libmachine.API.Create for "force-systemd-env-814000" (driver="qemu2")
	I0729 03:31:39.566144    8667 client.go:168] LocalClient.Create starting
	I0729 03:31:39.566209    8667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:39.566238    8667 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:39.566248    8667 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:39.566286    8667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:39.566308    8667 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:39.566316    8667 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:39.566642    8667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:39.719286    8667 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:39.897179    8667 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:39.897188    8667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:39.897376    8667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2
	I0729 03:31:39.907126    8667 main.go:141] libmachine: STDOUT: 
	I0729 03:31:39.907149    8667 main.go:141] libmachine: STDERR: 
	I0729 03:31:39.907218    8667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2 +20000M
	I0729 03:31:39.916373    8667 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:39.916402    8667 main.go:141] libmachine: STDERR: 
	I0729 03:31:39.916425    8667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2
	I0729 03:31:39.916429    8667 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:39.916442    8667 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:39.916472    8667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:3e:65:5b:09:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2
	I0729 03:31:39.918495    8667 main.go:141] libmachine: STDOUT: 
	I0729 03:31:39.918511    8667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:39.918531    8667 client.go:171] duration metric: took 352.389708ms to LocalClient.Create
	I0729 03:31:41.920710    8667 start.go:128] duration metric: took 2.379266333s to createHost
	I0729 03:31:41.920782    8667 start.go:83] releasing machines lock for "force-systemd-env-814000", held for 2.379406375s
	W0729 03:31:41.920842    8667 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:41.929969    8667 out.go:177] * Deleting "force-systemd-env-814000" in qemu2 ...
	W0729 03:31:41.959004    8667 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:41.959031    8667 start.go:729] Will try again in 5 seconds ...
	I0729 03:31:46.961107    8667 start.go:360] acquireMachinesLock for force-systemd-env-814000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:47.439016    8667 start.go:364] duration metric: took 477.829334ms to acquireMachinesLock for "force-systemd-env-814000"
	I0729 03:31:47.439163    8667 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-814000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-814000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:47.439502    8667 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:47.444154    8667 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 03:31:47.493629    8667 start.go:159] libmachine.API.Create for "force-systemd-env-814000" (driver="qemu2")
	I0729 03:31:47.493676    8667 client.go:168] LocalClient.Create starting
	I0729 03:31:47.493822    8667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:47.493898    8667 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:47.493920    8667 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:47.493987    8667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:47.494034    8667 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:47.494049    8667 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:47.494681    8667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:47.651673    8667 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:47.876952    8667 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:47.876965    8667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:47.877214    8667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2
	I0729 03:31:47.886855    8667 main.go:141] libmachine: STDOUT: 
	I0729 03:31:47.886874    8667 main.go:141] libmachine: STDERR: 
	I0729 03:31:47.886924    8667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2 +20000M
	I0729 03:31:47.894752    8667 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:47.894766    8667 main.go:141] libmachine: STDERR: 
	I0729 03:31:47.894776    8667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2
	I0729 03:31:47.894782    8667 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:47.894792    8667 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:47.894827    8667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:23:19:9b:c0:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/force-systemd-env-814000/disk.qcow2
	I0729 03:31:47.896436    8667 main.go:141] libmachine: STDOUT: 
	I0729 03:31:47.896451    8667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:47.896463    8667 client.go:171] duration metric: took 402.78975ms to LocalClient.Create
	I0729 03:31:49.898694    8667 start.go:128] duration metric: took 2.459155625s to createHost
	I0729 03:31:49.898797    8667 start.go:83] releasing machines lock for "force-systemd-env-814000", held for 2.459776042s
	W0729 03:31:49.899189    8667 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:49.914863    8667 out.go:177] 
	W0729 03:31:49.922856    8667 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:31:49.922877    8667 out.go:239] * 
	* 
	W0729 03:31:49.924765    8667 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:31:49.933552    8667 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-814000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-814000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-814000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.776584ms)

-- stdout --
	* The control-plane node force-systemd-env-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-814000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-814000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 03:31:50.030805 -0700 PDT m=+750.427902251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-814000 -n force-systemd-env-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-814000 -n force-systemd-env-814000: exit status 7 (32.4925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-814000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-814000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-814000
--- FAIL: TestForceSystemdEnv (10.75s)
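
Every create attempt above dies at the same point: QEMU is launched through socket_vmnet_client, and nothing is answering on /var/run/socket_vmnet. A minimal triage sketch, using the paths from the log above; the launch flags follow the socket_vmnet README and the gateway address is an assumption, not something recorded in this report:

	# Is the daemon socket present? (path taken from the log above)
	ls -l /var/run/socket_vmnet

	# If it is missing, start the daemon by hand before re-running the suite.
	# The --vmnet-gateway value is a placeholder; match the host's vmnet subnet.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet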

TestErrorSpam/setup (10.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-284000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-284000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 --driver=qemu2 : exit status 80 (10.015324709s)

-- stdout --
	* [nospam-284000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-284000" primary control-plane node in "nospam-284000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-284000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-284000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-284000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-284000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-284000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19337
- KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-284000" primary control-plane node in "nospam-284000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-284000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-284000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (10.02s)
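
Note that the disk-image steps in these runs (qemu-img convert, qemu-img resize) succeed every time; only the network hand-off fails. Judging by the command lines in the log, socket_vmnet_client connects to the socket and execs the command it is given with the connection on fd 3 (hence "-netdev socket,id=net0,fd=3"), so connectivity can be probed without QEMU at all. A sketch, assuming the client's usual SOCKET CMD... calling convention:

	# Should exec /usr/bin/true silently when the daemon is reachable;
	# prints the same "Failed to connect" error seen above when it is not.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true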

TestFunctional/serial/StartWithProxy (9.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-568000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-568000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.923585s)

-- stdout --
	* [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-568000" primary control-plane node in "functional-568000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-568000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51074 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51074 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51074 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-568000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19337
- KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-568000" primary control-plane node in "functional-568000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-568000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51074 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51074 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51074 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (67.537834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.99s)
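
The "Local proxy ignored" warnings are expected: minikube deliberately refuses to forward a proxy bound to localhost into the VM, since localhost inside the guest is not the host. For the test's "You appear to be using a proxy" path to trigger, the proxy would have to sit on an address routable from the guest; a hypothetical invocation (the address is a placeholder, not a value from this run):

	HTTP_PROXY=http://192.168.105.1:51074 out/minikube-darwin-arm64 start -p functional-568000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2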

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-568000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-568000 --alsologtostderr -v=8: exit status 80 (5.185881791s)

-- stdout --
	* [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-568000" primary control-plane node in "functional-568000" cluster
	* Restarting existing qemu2 VM for "functional-568000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-568000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:20:45.525311    7103 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:20:45.525429    7103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:20:45.525432    7103 out.go:304] Setting ErrFile to fd 2...
	I0729 03:20:45.525435    7103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:20:45.525581    7103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:20:45.526591    7103 out.go:298] Setting JSON to false
	I0729 03:20:45.543072    7103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4814,"bootTime":1722243631,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:20:45.543136    7103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:20:45.548332    7103 out.go:177] * [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:20:45.555254    7103 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:20:45.555260    7103 notify.go:220] Checking for updates...
	I0729 03:20:45.561220    7103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:20:45.565167    7103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:20:45.569204    7103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:20:45.570625    7103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:20:45.574133    7103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:20:45.577424    7103 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:20:45.577472    7103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:20:45.582060    7103 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:20:45.589191    7103 start.go:297] selected driver: qemu2
	I0729 03:20:45.589195    7103 start.go:901] validating driver "qemu2" against &{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:20:45.589243    7103 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:20:45.591546    7103 cni.go:84] Creating CNI manager for ""
	I0729 03:20:45.591562    7103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:20:45.591601    7103 start.go:340] cluster config:
	{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:20:45.595032    7103 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:20:45.602173    7103 out.go:177] * Starting "functional-568000" primary control-plane node in "functional-568000" cluster
	I0729 03:20:45.606240    7103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:20:45.606257    7103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:20:45.606268    7103 cache.go:56] Caching tarball of preloaded images
	I0729 03:20:45.606328    7103 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:20:45.606337    7103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:20:45.606396    7103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/functional-568000/config.json ...
	I0729 03:20:45.606888    7103 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:20:45.606917    7103 start.go:364] duration metric: took 23.167µs to acquireMachinesLock for "functional-568000"
	I0729 03:20:45.606927    7103 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:20:45.606933    7103 fix.go:54] fixHost starting: 
	I0729 03:20:45.607050    7103 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
	W0729 03:20:45.607060    7103 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:20:45.614180    7103 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
	I0729 03:20:45.618184    7103 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:20:45.618232    7103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
	I0729 03:20:45.620392    7103 main.go:141] libmachine: STDOUT: 
	I0729 03:20:45.620414    7103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:20:45.620444    7103 fix.go:56] duration metric: took 13.510833ms for fixHost
	I0729 03:20:45.620450    7103 start.go:83] releasing machines lock for "functional-568000", held for 13.528833ms
	W0729 03:20:45.620456    7103 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:20:45.620500    7103 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:20:45.620509    7103 start.go:729] Will try again in 5 seconds ...
	I0729 03:20:50.622510    7103 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:20:50.622798    7103 start.go:364] duration metric: took 224.916µs to acquireMachinesLock for "functional-568000"
	I0729 03:20:50.622881    7103 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:20:50.622894    7103 fix.go:54] fixHost starting: 
	I0729 03:20:50.623308    7103 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
	W0729 03:20:50.623327    7103 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:20:50.631781    7103 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
	I0729 03:20:50.635820    7103 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:20:50.636088    7103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
	I0729 03:20:50.646154    7103 main.go:141] libmachine: STDOUT: 
	I0729 03:20:50.646217    7103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:20:50.646284    7103 fix.go:56] duration metric: took 23.390584ms for fixHost
	I0729 03:20:50.646303    7103 start.go:83] releasing machines lock for "functional-568000", held for 23.490875ms
	W0729 03:20:50.646478    7103 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:20:50.653726    7103 out.go:177] 
	W0729 03:20:50.656771    7103 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:20:50.656803    7103 out.go:239] * 
	* 
	W0729 03:20:50.659414    7103 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:20:50.667753    7103 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-568000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.187591792s for "functional-568000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (69.527541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
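
SoftStart reuses the existing profile, so it takes the "Restarting existing qemu2 VM" path instead of create, but it hits the identical socket_vmnet wall. When the daemon cannot be repaired, the qemu2 driver can fall back to QEMU's user-mode networking, which avoids socket_vmnet entirely at the cost of node-IP reachability. A sketch, assuming the driver's --network flag:

	# builtin (user-mode) networking needs no socket_vmnet daemon
	out/minikube-darwin-arm64 start -p functional-568000 --driver=qemu2 --network=builtin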

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.229417ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-568000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.2185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
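
This failure is purely downstream: because start never completed, minikube never wrote a functional-568000 entry into the kubeconfig, so there is no current context to read. The kubeconfig state can be confirmed directly (the explicit path comes from this job's KUBECONFIG shown in the logs above):

	kubectl config get-contexts
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19337-6349/kubeconfig config get-contexts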

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-568000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-568000 get po -A: exit status 1 (26.198583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-568000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-568000\n"*: args "kubectl --context functional-568000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-568000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.381ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl images: exit status 83 (41.683083ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.915916ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-568000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.003458ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.781542ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-568000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
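
Note that the "cache reload" step at functional_test.go:1154 records no Non-zero exit; only the ssh-based rmi/inspecti steps fail, since they need a running node. The host-side cache the command would load from can be inspected directly (the path follows this job's MINIKUBE_HOME; the images subdirectory layout is an assumption, not shown in this log):

	ls /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images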

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 kubectl -- --context functional-568000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 kubectl -- --context functional-568000 get pods: exit status 1 (707.882042ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-568000
	* no server found for cluster "functional-568000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-568000 kubectl -- --context functional-568000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (31.607042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-568000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-568000 get pods: exit status 1 (946.4495ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-568000
	* no server found for cluster "functional-568000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-568000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.657625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)
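
Both kubectl invocations (via "minikube kubectl" above and the out/kubectl binary here) fail the same way: the earlier failed start left no "functional-568000" entry in the run's kubeconfig, so kubectl gives up before contacting any server. A quick check on the agent would confirm this, using the KUBECONFIG path the run sets (visible in the start output below):

	# Lists contexts in the test run's kubeconfig; "functional-568000" would be
	# absent, matching the "context was not found for specified context" error above
	KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig kubectl config get-contexts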

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-568000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-568000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.185147583s)

-- stdout --
	* [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-568000" primary control-plane node in "functional-568000" cluster
	* Restarting existing qemu2 VM for "functional-568000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-568000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-568000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.185587875s for "functional-568000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (65.294834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
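
This stderr shows the root cause that recurs across the report: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on the socket_vmnet daemon socket at /var/run/socket_vmnet. A triage sketch for the host, assuming socket_vmnet was installed via Homebrew (service name and socket path may differ on this agent):

	# Is the socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If nothing is listening, restarting the service may recover subsequent runs
	# (Homebrew-managed installs; launchd-managed setups need launchctl instead)
	sudo brew services restart socket_vmnet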

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-568000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-568000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (30.444166ms)

** stderr ** 
	error: context "functional-568000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-568000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.140542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 logs: exit status 83 (77.6645ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-462000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-462000                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-278000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-278000                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-881000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| delete  | -p download-only-881000                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| delete  | -p download-only-462000                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| delete  | -p download-only-278000                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| delete  | -p download-only-881000                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| start   | --download-only -p                                                       | binary-mirror-847000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | binary-mirror-847000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51039                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-847000                                                  | binary-mirror-847000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| addons  | enable dashboard -p                                                      | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | addons-797000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | addons-797000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-797000 --wait=true                                             | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-797000                                                         | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| start   | -p nospam-284000 -n=1 --memory=2250 --wait=false                         | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-284000                                                         | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-568000                              |                      |         |         |                     |                     |
	| cache   | functional-568000 cache delete                                           | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | minikube-local-cache-test:functional-568000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| ssh     | functional-568000 ssh sudo                                               | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-568000                                                        | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-568000 ssh                                                    | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-568000 cache reload                                           | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	| ssh     | functional-568000 ssh                                                    | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-568000 kubectl --                                             | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | --context functional-568000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:20:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:20:55.830260    7181 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:20:55.830380    7181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:20:55.830382    7181 out.go:304] Setting ErrFile to fd 2...
	I0729 03:20:55.830384    7181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:20:55.830516    7181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:20:55.831545    7181 out.go:298] Setting JSON to false
	I0729 03:20:55.847763    7181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4824,"bootTime":1722243631,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:20:55.847842    7181 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:20:55.855601    7181 out.go:177] * [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:20:55.864560    7181 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:20:55.864617    7181 notify.go:220] Checking for updates...
	I0729 03:20:55.874480    7181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:20:55.877508    7181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:20:55.878906    7181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:20:55.881525    7181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:20:55.884509    7181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:20:55.887772    7181 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:20:55.887826    7181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:20:55.892498    7181 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:20:55.899476    7181 start.go:297] selected driver: qemu2
	I0729 03:20:55.899481    7181 start.go:901] validating driver "qemu2" against &{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:20:55.899529    7181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:20:55.902003    7181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:20:55.902022    7181 cni.go:84] Creating CNI manager for ""
	I0729 03:20:55.902030    7181 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:20:55.902084    7181 start.go:340] cluster config:
	{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:20:55.905716    7181 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:20:55.914484    7181 out.go:177] * Starting "functional-568000" primary control-plane node in "functional-568000" cluster
	I0729 03:20:55.918436    7181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:20:55.918452    7181 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:20:55.918462    7181 cache.go:56] Caching tarball of preloaded images
	I0729 03:20:55.918519    7181 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:20:55.918523    7181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:20:55.918602    7181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/functional-568000/config.json ...
	I0729 03:20:55.919060    7181 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:20:55.919098    7181 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "functional-568000"
	I0729 03:20:55.919106    7181 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:20:55.919112    7181 fix.go:54] fixHost starting: 
	I0729 03:20:55.919230    7181 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
	W0729 03:20:55.919236    7181 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:20:55.926516    7181 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
	I0729 03:20:55.930515    7181 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:20:55.930559    7181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
	I0729 03:20:55.932525    7181 main.go:141] libmachine: STDOUT: 
	I0729 03:20:55.932540    7181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:20:55.932566    7181 fix.go:56] duration metric: took 13.455ms for fixHost
	I0729 03:20:55.932569    7181 start.go:83] releasing machines lock for "functional-568000", held for 13.469375ms
	W0729 03:20:55.932574    7181 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:20:55.932613    7181 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:20:55.932618    7181 start.go:729] Will try again in 5 seconds ...
	I0729 03:21:00.934717    7181 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:21:00.935083    7181 start.go:364] duration metric: took 292.875µs to acquireMachinesLock for "functional-568000"
	I0729 03:21:00.935187    7181 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:21:00.935197    7181 fix.go:54] fixHost starting: 
	I0729 03:21:00.935795    7181 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
	W0729 03:21:00.935813    7181 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:21:00.944276    7181 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
	I0729 03:21:00.947207    7181 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:21:00.947412    7181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
	I0729 03:21:00.955034    7181 main.go:141] libmachine: STDOUT: 
	I0729 03:21:00.955083    7181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:21:00.955155    7181 fix.go:56] duration metric: took 19.958875ms for fixHost
	I0729 03:21:00.955169    7181 start.go:83] releasing machines lock for "functional-568000", held for 20.073166ms
	W0729 03:21:00.955296    7181 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:21:00.961293    7181 out.go:177] 
	W0729 03:21:00.965278    7181 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:21:00.965298    7181 out.go:239] * 
	W0729 03:21:00.967717    7181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:21:00.977183    7181 out.go:177] 
	
	
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-568000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
|         | -p download-only-462000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| delete  | -p download-only-462000                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
|         | -p download-only-278000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| delete  | -p download-only-278000                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
|         | -p download-only-881000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-881000                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-462000                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-278000                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-881000                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-847000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | binary-mirror-847000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51039                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-847000                                                  | binary-mirror-847000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| addons  | enable dashboard -p                                                      | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | addons-797000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | addons-797000                                                            |                      |         |         |                     |                     |
| start   | -p addons-797000 --wait=true                                             | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-797000                                                         | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| start   | -p nospam-284000 -n=1 --memory=2250 --wait=false                         | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-284000                                                         | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | minikube-local-cache-test:functional-568000                              |                      |         |         |                     |                     |
| cache   | functional-568000 cache delete                                           | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | minikube-local-cache-test:functional-568000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| ssh     | functional-568000 ssh sudo                                               | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-568000                                                        | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-568000 ssh                                                    | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-568000 cache reload                                           | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| ssh     | functional-568000 ssh                                                    | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-568000 kubectl --                                             | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --context functional-568000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/29 03:20:55
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 03:20:55.830260    7181 out.go:291] Setting OutFile to fd 1 ...
I0729 03:20:55.830380    7181 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:20:55.830382    7181 out.go:304] Setting ErrFile to fd 2...
I0729 03:20:55.830384    7181 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:20:55.830516    7181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:20:55.831545    7181 out.go:298] Setting JSON to false
I0729 03:20:55.847763    7181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4824,"bootTime":1722243631,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 03:20:55.847842    7181 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 03:20:55.855601    7181 out.go:177] * [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 03:20:55.864560    7181 out.go:177]   - MINIKUBE_LOCATION=19337
I0729 03:20:55.864617    7181 notify.go:220] Checking for updates...
I0729 03:20:55.874480    7181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
I0729 03:20:55.877508    7181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 03:20:55.878906    7181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 03:20:55.881525    7181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
I0729 03:20:55.884509    7181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 03:20:55.887772    7181 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:20:55.887826    7181 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 03:20:55.892498    7181 out.go:177] * Using the qemu2 driver based on existing profile
I0729 03:20:55.899476    7181 start.go:297] selected driver: qemu2
I0729 03:20:55.899481    7181 start.go:901] validating driver "qemu2" against &{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 03:20:55.899529    7181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 03:20:55.902003    7181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 03:20:55.902022    7181 cni.go:84] Creating CNI manager for ""
I0729 03:20:55.902030    7181 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 03:20:55.902084    7181 start.go:340] cluster config:
{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 03:20:55.905716    7181 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 03:20:55.914484    7181 out.go:177] * Starting "functional-568000" primary control-plane node in "functional-568000" cluster
I0729 03:20:55.918436    7181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 03:20:55.918452    7181 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 03:20:55.918462    7181 cache.go:56] Caching tarball of preloaded images
I0729 03:20:55.918519    7181 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 03:20:55.918523    7181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 03:20:55.918602    7181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/functional-568000/config.json ...
I0729 03:20:55.919060    7181 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 03:20:55.919098    7181 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "functional-568000"
I0729 03:20:55.919106    7181 start.go:96] Skipping create...Using existing machine configuration
I0729 03:20:55.919112    7181 fix.go:54] fixHost starting: 
I0729 03:20:55.919230    7181 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
W0729 03:20:55.919236    7181 fix.go:138] unexpected machine state, will restart: <nil>
I0729 03:20:55.926516    7181 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
I0729 03:20:55.930515    7181 qemu.go:418] Using hvf for hardware acceleration
I0729 03:20:55.930559    7181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
I0729 03:20:55.932525    7181 main.go:141] libmachine: STDOUT: 
I0729 03:20:55.932540    7181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 03:20:55.932566    7181 fix.go:56] duration metric: took 13.455ms for fixHost
I0729 03:20:55.932569    7181 start.go:83] releasing machines lock for "functional-568000", held for 13.469375ms
W0729 03:20:55.932574    7181 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 03:20:55.932613    7181 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 03:20:55.932618    7181 start.go:729] Will try again in 5 seconds ...
I0729 03:21:00.934717    7181 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 03:21:00.935083    7181 start.go:364] duration metric: took 292.875µs to acquireMachinesLock for "functional-568000"
I0729 03:21:00.935187    7181 start.go:96] Skipping create...Using existing machine configuration
I0729 03:21:00.935197    7181 fix.go:54] fixHost starting: 
I0729 03:21:00.935795    7181 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
W0729 03:21:00.935813    7181 fix.go:138] unexpected machine state, will restart: <nil>
I0729 03:21:00.944276    7181 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
I0729 03:21:00.947207    7181 qemu.go:418] Using hvf for hardware acceleration
I0729 03:21:00.947412    7181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
I0729 03:21:00.955034    7181 main.go:141] libmachine: STDOUT: 
I0729 03:21:00.955083    7181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 03:21:00.955155    7181 fix.go:56] duration metric: took 19.958875ms for fixHost
I0729 03:21:00.955169    7181 start.go:83] releasing machines lock for "functional-568000", held for 20.073166ms
W0729 03:21:00.955296    7181 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 03:21:00.961293    7181 out.go:177] 
W0729 03:21:00.965278    7181 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 03:21:00.965298    7181 out.go:239] * 
W0729 03:21:00.967717    7181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 03:21:00.977183    7181 out.go:177] 

* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
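
The repeated "Failed to connect to \"/var/run/socket_vmnet\": Connection refused" in both restart attempts above means the qemu2 driver could not reach the socket_vmnet daemon on the host, so the VM never booted and "minikube logs" had no Linux-side output for the test to match. A minimal host-side diagnostic sketch (shell), assuming socket_vmnet uses the paths shown in this log; the restart command is an assumption that depends on how the daemon was installed:

    # Is the socket_vmnet daemon running, and does its socket exist?
    pgrep -fl socket_vmnet || echo "socket_vmnet is not running"
    ls -l /var/run/socket_vmnet   # a socket file should be listed when the daemon is up
    # If it is down and was installed via Homebrew (assumption for this host):
    #   sudo brew services restart socket_vmnet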

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd427027774/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
|         | -p download-only-462000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| delete  | -p download-only-462000                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
|         | -p download-only-278000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| delete  | -p download-only-278000                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
|         | -p download-only-881000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-881000                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-462000                                                  | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-278000                                                  | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| delete  | -p download-only-881000                                                  | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-847000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | binary-mirror-847000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51039                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-847000                                                  | binary-mirror-847000 | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| addons  | enable dashboard -p                                                      | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | addons-797000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | addons-797000                                                            |                      |         |         |                     |                     |
| start   | -p addons-797000 --wait=true                                             | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-797000                                                         | addons-797000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| start   | -p nospam-284000 -n=1 --memory=2250 --wait=false                         | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-284000 --log_dir                                                  | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-284000                                                         | nospam-284000        | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-568000 cache add                                              | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | minikube-local-cache-test:functional-568000                              |                      |         |         |                     |                     |
| cache   | functional-568000 cache delete                                           | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | minikube-local-cache-test:functional-568000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| ssh     | functional-568000 ssh sudo                                               | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-568000                                                        | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-568000 ssh                                                    | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-568000 cache reload                                           | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
| ssh     | functional-568000 ssh                                                    | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT | 29 Jul 24 03:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-568000 kubectl --                                             | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --context functional-568000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-568000                                                     | functional-568000    | jenkins | v1.33.1 | 29 Jul 24 03:20 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/29 03:20:55
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 03:20:55.830260    7181 out.go:291] Setting OutFile to fd 1 ...
I0729 03:20:55.830380    7181 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:20:55.830382    7181 out.go:304] Setting ErrFile to fd 2...
I0729 03:20:55.830384    7181 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:20:55.830516    7181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:20:55.831545    7181 out.go:298] Setting JSON to false
I0729 03:20:55.847763    7181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4824,"bootTime":1722243631,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 03:20:55.847842    7181 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 03:20:55.855601    7181 out.go:177] * [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 03:20:55.864560    7181 out.go:177]   - MINIKUBE_LOCATION=19337
I0729 03:20:55.864617    7181 notify.go:220] Checking for updates...
I0729 03:20:55.874480    7181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
I0729 03:20:55.877508    7181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 03:20:55.878906    7181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 03:20:55.881525    7181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
I0729 03:20:55.884509    7181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 03:20:55.887772    7181 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:20:55.887826    7181 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 03:20:55.892498    7181 out.go:177] * Using the qemu2 driver based on existing profile
I0729 03:20:55.899476    7181 start.go:297] selected driver: qemu2
I0729 03:20:55.899481    7181 start.go:901] validating driver "qemu2" against &{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 03:20:55.899529    7181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 03:20:55.902003    7181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 03:20:55.902022    7181 cni.go:84] Creating CNI manager for ""
I0729 03:20:55.902030    7181 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 03:20:55.902084    7181 start.go:340] cluster config:
{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 03:20:55.905716    7181 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 03:20:55.914484    7181 out.go:177] * Starting "functional-568000" primary control-plane node in "functional-568000" cluster
I0729 03:20:55.918436    7181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 03:20:55.918452    7181 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 03:20:55.918462    7181 cache.go:56] Caching tarball of preloaded images
I0729 03:20:55.918519    7181 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 03:20:55.918523    7181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 03:20:55.918602    7181 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/functional-568000/config.json ...
I0729 03:20:55.919060    7181 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 03:20:55.919098    7181 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "functional-568000"
I0729 03:20:55.919106    7181 start.go:96] Skipping create...Using existing machine configuration
I0729 03:20:55.919112    7181 fix.go:54] fixHost starting: 
I0729 03:20:55.919230    7181 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
W0729 03:20:55.919236    7181 fix.go:138] unexpected machine state, will restart: <nil>
I0729 03:20:55.926516    7181 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
I0729 03:20:55.930515    7181 qemu.go:418] Using hvf for hardware acceleration
I0729 03:20:55.930559    7181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
I0729 03:20:55.932525    7181 main.go:141] libmachine: STDOUT: 
I0729 03:20:55.932540    7181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 03:20:55.932566    7181 fix.go:56] duration metric: took 13.455ms for fixHost
I0729 03:20:55.932569    7181 start.go:83] releasing machines lock for "functional-568000", held for 13.469375ms
W0729 03:20:55.932574    7181 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 03:20:55.932613    7181 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 03:20:55.932618    7181 start.go:729] Will try again in 5 seconds ...
I0729 03:21:00.934717    7181 start.go:360] acquireMachinesLock for functional-568000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 03:21:00.935083    7181 start.go:364] duration metric: took 292.875µs to acquireMachinesLock for "functional-568000"
I0729 03:21:00.935187    7181 start.go:96] Skipping create...Using existing machine configuration
I0729 03:21:00.935197    7181 fix.go:54] fixHost starting: 
I0729 03:21:00.935795    7181 fix.go:112] recreateIfNeeded on functional-568000: state=Stopped err=<nil>
W0729 03:21:00.935813    7181 fix.go:138] unexpected machine state, will restart: <nil>
I0729 03:21:00.944276    7181 out.go:177] * Restarting existing qemu2 VM for "functional-568000" ...
I0729 03:21:00.947207    7181 qemu.go:418] Using hvf for hardware acceleration
I0729 03:21:00.947412    7181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:63:62:f4:f5:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/functional-568000/disk.qcow2
I0729 03:21:00.955034    7181 main.go:141] libmachine: STDOUT: 
I0729 03:21:00.955083    7181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 03:21:00.955155    7181 fix.go:56] duration metric: took 19.958875ms for fixHost
I0729 03:21:00.955169    7181 start.go:83] releasing machines lock for "functional-568000", held for 20.073166ms
W0729 03:21:00.955296    7181 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-568000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 03:21:00.961293    7181 out.go:177] 
W0729 03:21:00.965278    7181 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 03:21:00.965298    7181 out.go:239] * 
W0729 03:21:00.967717    7181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 03:21:00.977183    7181 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
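
Every start attempt in the log above dies at the same point: the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client and gets "Connection refused" on /var/run/socket_vmnet, so the VM never boots and every later Functional test inherits a stopped host. A minimal sketch of how the daemon could be checked on the agent, assuming the standalone (make-based) socket_vmnet install that the /opt/socket_vmnet client path suggests; the launchd label and the Homebrew formula name below are assumptions, not taken from this report:

    # Is anything serving the control socket the driver tries to dial?
    $ ls -l /var/run/socket_vmnet

    # Standalone install: socket_vmnet ships a launchd service (assumed label):
    $ sudo launchctl list | grep socket_vmnet

    # Homebrew install instead (assumed formula name):
    $ sudo brew services start socket_vmnet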

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-568000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-568000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.275ms)

                                                
                                                
** stderr ** 
	error: context "functional-568000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-568000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
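
The apply never reaches a cluster: kubectl aborts because no "functional-568000" context exists in the kubeconfig, which follows directly from the failed start above. Stock kubectl can confirm which contexts are actually present:

    $ kubectl config get-contexts
    $ kubectl config current-context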

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-568000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-568000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-568000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-568000 --alsologtostderr -v=1] stderr:
I0729 03:21:41.526560    7492 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:41.526963    7492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.526967    7492 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:41.526970    7492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.527170    7492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:41.527474    7492 mustload.go:65] Loading cluster: functional-568000
I0729 03:21:41.527696    7492 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:41.531890    7492 out.go:177] * The control-plane node functional-568000 host is not running: state=Stopped
I0729 03:21:41.535861    7492 out.go:177]   To start a cluster, run: "minikube start -p functional-568000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (40.529375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 status: exit status 7 (29.397125ms)

                                                
                                                
-- stdout --
	functional-568000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-568000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (28.503833ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-568000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 status -o json: exit status 7 (28.640083ms)

                                                
                                                
-- stdout --
	{"Name":"functional-568000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-568000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (28.423084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
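
For reference, the -f/--format flag exercised above takes a Go template evaluated against the same status struct that -o json prints, so the usable field names are exactly the keys visible in the JSON output above (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), e.g.:

    $ out/minikube-darwin-arm64 -p functional-568000 status -f 'host:{{.Host}} apiserver:{{.APIServer}}'

Exit status 7 by itself appears to be the expected signal for a stopped host; the helper explicitly treats it as "may be ok", and it is the Stopped payload, not the exit code, that fails the assertions.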

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-568000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-568000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.650375ms)

                                                
                                                
** stderr ** 
	error: context "functional-568000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-568000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-568000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-568000 describe po hello-node-connect: exit status 1 (25.755125ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

                                                
                                                
** /stderr **
functional_test.go:1600: "kubectl --context functional-568000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-568000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-568000 logs -l app=hello-node-connect: exit status 1 (25.75075ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

                                                
                                                
** /stderr **
functional_test.go:1606: "kubectl --context functional-568000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-568000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-568000 describe svc hello-node-connect: exit status 1 (26.451667ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

                                                
                                                
** /stderr **
functional_test.go:1612: "kubectl --context functional-568000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.826542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-568000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.070208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "echo hello": exit status 83 (42.698709ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n"*. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "cat /etc/hostname": exit status 83 (50.803167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-568000"- but got *"* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n"*. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (30.194208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (51.567833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.879041ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-568000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-568000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cp functional-568000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1593783316/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 cp functional-568000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1593783316/001/cp-test.txt: exit status 83 (41.691875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-568000 cp functional-568000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1593783316/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.839708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1593783316/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (51.762416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (39.874541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-568000 ssh -n functional-568000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-568000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-568000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
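
The sequence above exercises both directions of minikube cp, whose arguments are <source> <target> with either side optionally prefixed by <node>:, as the logged commands show. Against a running cluster the two directions would look like:

    # host -> node (the form the test uses first)
    $ out/minikube-darwin-arm64 -p functional-568000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    $ out/minikube-darwin-arm64 -p functional-568000 cp functional-568000:/home/docker/cp-test.txt ./cp-test.txt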

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6843/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/test/nested/copy/6843/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/test/nested/copy/6843/hosts": exit status 83 (45.998458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/test/nested/copy/6843/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-568000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-568000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.209625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
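
FileSync verifies minikube's file-sync convention: files placed under $MINIKUBE_HOME/files on the host are copied into the guest at the same relative path on start, which is why the test expects /etc/test/nested/copy/6843/hosts inside the VM (6843 is apparently the test process's pid). A sketch of the convention, with hypothetical paths:

    $ mkdir -p ~/.minikube/files/etc/test/nested/copy/6843
    $ cp /etc/hosts ~/.minikube/files/etc/test/nested/copy/6843/hosts
    # after the next start, the file should appear in the guest:
    $ minikube -p functional-568000 ssh 'cat /etc/test/nested/copy/6843/hosts'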

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6843.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/6843.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/6843.pem": exit status 83 (46.764458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6843.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo cat /etc/ssl/certs/6843.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6843.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-568000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-568000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6843.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /usr/share/ca-certificates/6843.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /usr/share/ca-certificates/6843.pem": exit status 83 (44.581584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6843.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo cat /usr/share/ca-certificates/6843.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6843.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-568000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-568000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (39.943583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-568000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-568000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/68432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/68432.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/68432.pem": exit status 83 (39.66025ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/68432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo cat /etc/ssl/certs/68432.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/68432.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-568000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-568000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/68432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /usr/share/ca-certificates/68432.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /usr/share/ca-certificates/68432.pem": exit status 83 (39.774375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/68432.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo cat /usr/share/ca-certificates/68432.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/68432.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-568000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-568000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (45.50475ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-568000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-568000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (29.785166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
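
The "(-want +got)" diff above is the report format of the github.com/google/go-cmp/cmp package, which these tests appear to use for the comparison: a leading "-" marks a line present only in the expected value, "+" a line present only in the actual output. A minimal, self-contained sketch of the same technique; the want/got strings are stand-ins, not the test's real fixtures:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Stand-ins: the PEM the test expects vs. what "minikube ssh sudo cat"
		// actually printed while the host was stopped.
		want := "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
		got := "* The control-plane node functional-568000 host is not running: state=Stopped"
		// cmp.Diff returns "" when the values match; otherwise a "-want +got"
		// report like the one in the log above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("mismatch (-want +got):\n%s", diff)
		}
	}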

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-568000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-568000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.179208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-568000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-568000 -n functional-568000: exit status 7 (28.677208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-568000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
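
The --template argument in the failed kubectl invocation is Go text/template syntax: {{range $k, $v := (index .items 0).metadata.labels}} iterates the first node's label map and prints each key. A minimal sketch of the same construct against a hard-coded map (the two labels below are assumptions for illustration only):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for (index .items 0).metadata.labels on a live node.
		labels := map[string]string{
			"minikube.k8s.io/name":    "functional-568000",
			"minikube.k8s.io/primary": "true",
		}
		// The same range construct the test passes to kubectl.
		tmpl := template.Must(template.New("labels").Parse(
			"{{range $k, $v := .}}{{$k}} {{end}}"))
		_ = tmpl.Execute(os.Stdout, labels) // prints each label key, space-separated
	}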

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo systemctl is-active crio": exit status 83 (45.430167ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 version -o=json --components: exit status 83 (40.852875ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-568000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-568000 image ls --format short --alsologtostderr:
I0729 03:21:41.958428    7509 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:41.958571    7509 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.958575    7509 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:41.958577    7509 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.958725    7509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:41.959176    7509 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:41.959240    7509 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-568000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-568000 image ls --format table --alsologtostderr:
I0729 03:21:42.027677    7513 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:42.027817    7513 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:42.027820    7513 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:42.027823    7513 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:42.027950    7513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:42.028378    7513 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:42.028437    7513 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-568000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-568000 image ls --format json --alsologtostderr:
I0729 03:21:41.993065    7511 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:41.993220    7511 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.993223    7511 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:41.993225    7511 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.993359    7511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:41.993750    7511 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:41.993810    7511 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-568000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-568000 image ls --format yaml --alsologtostderr:
I0729 03:21:41.922367    7507 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:41.922519    7507 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.922522    7507 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:41.922524    7507 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:41.922660    7507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:41.923086    7507 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:41.923144    7507 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh pgrep buildkitd: exit status 83 (39.686083ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image build -t localhost/my-image:functional-568000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-568000 image build -t localhost/my-image:functional-568000 testdata/build --alsologtostderr:
I0729 03:21:42.103199    7517 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:42.103551    7517 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:42.103555    7517 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:42.103557    7517 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:42.103745    7517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:42.104158    7517 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:42.104580    7517 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:42.104827    7517 build_images.go:133] succeeded building to: 
I0729 03:21:42.104831    7517 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls
functional_test.go:442: expected "localhost/my-image:functional-568000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-568000 docker-env) && out/minikube-darwin-arm64 status -p functional-568000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-568000 docker-env) && out/minikube-darwin-arm64 status -p functional-568000": exit status 1 (45.833917ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2: exit status 83 (39.806583ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
** stderr ** 
	I0729 03:21:41.798695    7501 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:21:41.799086    7501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.799090    7501 out.go:304] Setting ErrFile to fd 2...
	I0729 03:21:41.799093    7501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.799247    7501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:21:41.799461    7501 mustload.go:65] Loading cluster: functional-568000
	I0729 03:21:41.799662    7501 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:21:41.802625    7501 out.go:177] * The control-plane node functional-568000 host is not running: state=Stopped
	I0729 03:21:41.806343    7501 out.go:177]   To start a cluster, run: "minikube start -p functional-568000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2: exit status 83 (41.601042ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
** stderr ** 
	I0729 03:21:41.880236    7505 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:21:41.880389    7505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.880396    7505 out.go:304] Setting ErrFile to fd 2...
	I0729 03:21:41.880400    7505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.880532    7505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:21:41.880746    7505 mustload.go:65] Loading cluster: functional-568000
	I0729 03:21:41.880930    7505 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:21:41.885470    7505 out.go:177] * The control-plane node functional-568000 host is not running: state=Stopped
	I0729 03:21:41.889444    7505 out.go:177]   To start a cluster, run: "minikube start -p functional-568000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2: exit status 83 (40.665958ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
** stderr ** 
	I0729 03:21:41.839471    7503 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:21:41.839622    7503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.839625    7503 out.go:304] Setting ErrFile to fd 2...
	I0729 03:21:41.839628    7503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.839756    7503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:21:41.839980    7503 mustload.go:65] Loading cluster: functional-568000
	I0729 03:21:41.840191    7503 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:21:41.843420    7503 out.go:177] * The control-plane node functional-568000 host is not running: state=Stopped
	I0729 03:21:41.847451    7503 out.go:177]   To start a cluster, run: "minikube start -p functional-568000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-568000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-568000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-568000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.663ms)

** stderr ** 
	error: context "functional-568000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-568000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 service list: exit status 83 (43.073417ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-568000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 service list -o json: exit status 83 (41.815ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-568000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 service --namespace=default --https --url hello-node: exit status 83 (41.987ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-568000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 service hello-node --url --format={{.IP}}: exit status 83 (42.673666ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-568000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 service hello-node --url: exit status 83 (42.925167ms)

-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-568000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test.go:1565: failed to parse "* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"": parse "* The control-plane node functional-568000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-568000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
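
The parse failure above is Go's net/url rejecting the control character (the embedded newline) in minikube's two-line advisory text, which the test tried to interpret as a service URL. A minimal reproduction of just that error:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// Any ASCII control character in the raw URL (here the "\n" between
		// the two advisory lines) triggers this exact error.
		_, err := url.Parse("* The control-plane node ... state=Stopped\n  To start a cluster ...")
		fmt.Println(err) // ... net/url: invalid control character in URL
	}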

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0729 03:21:02.749053    7301 out.go:291] Setting OutFile to fd 1 ...
I0729 03:21:02.749167    7301 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:02.749170    7301 out.go:304] Setting ErrFile to fd 2...
I0729 03:21:02.749172    7301 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:21:02.749305    7301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:21:02.749526    7301 mustload.go:65] Loading cluster: functional-568000
I0729 03:21:02.749729    7301 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:21:02.754098    7301 out.go:177] * The control-plane node functional-568000 host is not running: state=Stopped
I0729 03:21:02.767180    7301 out.go:177]   To start a cluster, run: "minikube start -p functional-568000"

stdout: * The control-plane node functional-568000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-568000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7300: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-568000": client config: context "functional-568000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (90.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-568000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-568000 get svc nginx-svc: exit status 1 (69.22725ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-568000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-568000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (90.76s)
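
The "http: no Host in request URL" error above is net/http's transport rejecting a request whose URL has an empty host; with no tunnel IP ever published, the test's target collapsed to the bare scheme "http:". A minimal reproduction:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// "http:" parses as scheme-only, so the request has no host to dial.
		_, err := http.Get("http:")
		fmt.Println(err) // Get "http:": http: no Host in request URL
	}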

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image load --daemon docker.io/kicbase/echo-server:functional-568000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-568000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image load --daemon docker.io/kicbase/echo-server:functional-568000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-568000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-568000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image load --daemon docker.io/kicbase/echo-server:functional-568000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-568000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image save docker.io/kicbase/echo-server:functional-568000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-568000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.033772542s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
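
The dig invocation above queries the cluster DNS service directly at 10.96.0.10, which is only reachable while "minikube tunnel" is routing traffic. The same check can be scripted with a custom net.Resolver; this sketch reuses the server address and name from the log and, like dig here, will time out while the tunnel is down:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Route every lookup to the in-cluster DNS service instead of
			// the host's configured resolvers.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed (expected while the tunnel is down):", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}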

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.82s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.82s)

TestMultiControlPlane/serial/StartCluster (9.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-844000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-844000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.769063708s)

-- stdout --
	* [ha-844000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-844000" primary control-plane node in "ha-844000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-844000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:23:37.875168    7564 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:23:37.875286    7564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:23:37.875290    7564 out.go:304] Setting ErrFile to fd 2...
	I0729 03:23:37.875293    7564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:23:37.875410    7564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:23:37.876486    7564 out.go:298] Setting JSON to false
	I0729 03:23:37.892621    7564 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4986,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:23:37.892735    7564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:23:37.898027    7564 out.go:177] * [ha-844000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:23:37.906194    7564 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:23:37.906252    7564 notify.go:220] Checking for updates...
	I0729 03:23:37.912173    7564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:23:37.915215    7564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:23:37.918144    7564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:23:37.922183    7564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:23:37.925192    7564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:23:37.928186    7564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:23:37.931089    7564 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:23:37.938121    7564 start.go:297] selected driver: qemu2
	I0729 03:23:37.938131    7564 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:23:37.938139    7564 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:23:37.940407    7564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:23:37.943129    7564 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:23:37.947258    7564 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:23:37.947282    7564 cni.go:84] Creating CNI manager for ""
	I0729 03:23:37.947287    7564 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 03:23:37.947292    7564 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 03:23:37.947330    7564 start.go:340] cluster config:
	{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:23:37.950992    7564 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:23:37.960108    7564 out.go:177] * Starting "ha-844000" primary control-plane node in "ha-844000" cluster
	I0729 03:23:37.964003    7564 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:23:37.964019    7564 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:23:37.964027    7564 cache.go:56] Caching tarball of preloaded images
	I0729 03:23:37.964093    7564 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:23:37.964099    7564 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:23:37.964285    7564 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/ha-844000/config.json ...
	I0729 03:23:37.964296    7564 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/ha-844000/config.json: {Name:mkadfd174bbc9e909f8889dc70bd708c9da5a912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:23:37.964650    7564 start.go:360] acquireMachinesLock for ha-844000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:23:37.964684    7564 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "ha-844000"
	I0729 03:23:37.964697    7564 start.go:93] Provisioning new machine with config: &{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:23:37.964723    7564 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:23:37.973085    7564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:23:37.988866    7564 start.go:159] libmachine.API.Create for "ha-844000" (driver="qemu2")
	I0729 03:23:37.988895    7564 client.go:168] LocalClient.Create starting
	I0729 03:23:37.988977    7564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:23:37.989007    7564 main.go:141] libmachine: Decoding PEM data...
	I0729 03:23:37.989016    7564 main.go:141] libmachine: Parsing certificate...
	I0729 03:23:37.989053    7564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:23:37.989076    7564 main.go:141] libmachine: Decoding PEM data...
	I0729 03:23:37.989084    7564 main.go:141] libmachine: Parsing certificate...
	I0729 03:23:37.989560    7564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:23:38.140287    7564 main.go:141] libmachine: Creating SSH key...
	I0729 03:23:38.242667    7564 main.go:141] libmachine: Creating Disk image...
	I0729 03:23:38.242672    7564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:23:38.242905    7564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:23:38.252421    7564 main.go:141] libmachine: STDOUT: 
	I0729 03:23:38.252438    7564 main.go:141] libmachine: STDERR: 
	I0729 03:23:38.252494    7564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2 +20000M
	I0729 03:23:38.260271    7564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:23:38.260284    7564 main.go:141] libmachine: STDERR: 
	I0729 03:23:38.260301    7564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:23:38.260305    7564 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:23:38.260316    7564 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:23:38.260351    7564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:11:4e:c9:4f:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:23:38.261919    7564 main.go:141] libmachine: STDOUT: 
	I0729 03:23:38.261935    7564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:23:38.261953    7564 client.go:171] duration metric: took 273.059709ms to LocalClient.Create
	I0729 03:23:40.264091    7564 start.go:128] duration metric: took 2.299395416s to createHost
	I0729 03:23:40.264158    7564 start.go:83] releasing machines lock for "ha-844000", held for 2.299503791s
	W0729 03:23:40.264265    7564 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:23:40.270690    7564 out.go:177] * Deleting "ha-844000" in qemu2 ...
	W0729 03:23:40.297927    7564 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:23:40.297957    7564 start.go:729] Will try again in 5 seconds ...
	I0729 03:23:45.299786    7564 start.go:360] acquireMachinesLock for ha-844000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:23:45.300284    7564 start.go:364] duration metric: took 339.5µs to acquireMachinesLock for "ha-844000"
	I0729 03:23:45.300426    7564 start.go:93] Provisioning new machine with config: &{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:23:45.300696    7564 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:23:45.310390    7564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:23:45.360919    7564 start.go:159] libmachine.API.Create for "ha-844000" (driver="qemu2")
	I0729 03:23:45.360960    7564 client.go:168] LocalClient.Create starting
	I0729 03:23:45.361070    7564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:23:45.361137    7564 main.go:141] libmachine: Decoding PEM data...
	I0729 03:23:45.361157    7564 main.go:141] libmachine: Parsing certificate...
	I0729 03:23:45.361228    7564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:23:45.361282    7564 main.go:141] libmachine: Decoding PEM data...
	I0729 03:23:45.361294    7564 main.go:141] libmachine: Parsing certificate...
	I0729 03:23:45.362005    7564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:23:45.525975    7564 main.go:141] libmachine: Creating SSH key...
	I0729 03:23:45.552360    7564 main.go:141] libmachine: Creating Disk image...
	I0729 03:23:45.552366    7564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:23:45.552568    7564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:23:45.561855    7564 main.go:141] libmachine: STDOUT: 
	I0729 03:23:45.561873    7564 main.go:141] libmachine: STDERR: 
	I0729 03:23:45.561918    7564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2 +20000M
	I0729 03:23:45.569653    7564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:23:45.569666    7564 main.go:141] libmachine: STDERR: 
	I0729 03:23:45.569681    7564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:23:45.569684    7564 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:23:45.569691    7564 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:23:45.569715    7564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8d:60:48:7b:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:23:45.571322    7564 main.go:141] libmachine: STDOUT: 
	I0729 03:23:45.571338    7564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:23:45.571350    7564 client.go:171] duration metric: took 210.389542ms to LocalClient.Create
	I0729 03:23:47.573497    7564 start.go:128] duration metric: took 2.272815667s to createHost
	I0729 03:23:47.573563    7564 start.go:83] releasing machines lock for "ha-844000", held for 2.273295916s
	W0729 03:23:47.573895    7564 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-844000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-844000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:23:47.584635    7564 out.go:177] 
	W0729 03:23:47.590759    7564 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:23:47.590811    7564 out.go:239] * 
	* 
	W0729 03:23:47.593502    7564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:23:47.602729    7564 out.go:177] 
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-844000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
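Both create attempts above die at the same step: the qemu2 driver launches the VM through socket_vmnet_client, and the dial to the socket_vmnet daemon's unix socket is refused, so no VM ever boots. A quick standalone check of that socket (a sketch; the path is the SocketVMnetPath from the cluster config above):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path taken from the failing run's cluster config (SocketVMnetPath).
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no
		// socket_vmnet daemon is accepting on it; "no such file or
		// directory" means the daemon was never started at this path.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}
```

A refused dial here means the host's socket_vmnet daemon is down or listening elsewhere; every ha-844000 failure that follows in this section is a downstream symptom of this one condition, since the cluster was never created.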
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (67.394959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.84s)

TestMultiControlPlane/serial/DeployApp (115.36s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (58.503833ms)
** stderr ** 
	error: cluster "ha-844000" does not exist
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- rollout status deployment/busybox: exit status 1 (56.600375ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.009708ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.026958ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.418375ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.246875ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.85375ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.583125ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.427667ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.794541ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.803833ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.965167ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.289625ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.102375ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.761625ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.567291ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.140166ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (30.06ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (115.36s)

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-844000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.073333ms)
** stderr ** 
	error: no server found for cluster "ha-844000"
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.388458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-844000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-844000 -v=7 --alsologtostderr: exit status 83 (42.056708ms)

-- stdout --
	* The control-plane node ha-844000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-844000"

-- /stdout --
** stderr ** 
	I0729 03:25:43.153552    7648 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:43.154123    7648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.154127    7648 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:43.154129    7648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.154303    7648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:43.154549    7648 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:43.154735    7648 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:43.158806    7648 out.go:177] * The control-plane node ha-844000 host is not running: state=Stopped
	I0729 03:25:43.161804    7648 out.go:177]   To start a cluster, run: "minikube start -p ha-844000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-844000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.839625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-844000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-844000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.208125ms)
** stderr ** 
	Error in configuration: context was not found for specified context: ha-844000
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-844000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-844000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
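The decode error follows mechanically from the first one: kubectl writes only to stderr when the context is missing, so the label decoder receives zero bytes of stdout, and encoding/json reports exactly this message on empty input. A minimal reproduction:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl printed nothing on stdout (the context does not exist),
	// so the test's decoder is handed an empty byte slice:
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}
```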
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.502916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-844000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
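The assertion above unmarshals the quoted profile JSON and counts Config.Nodes (1 found where 4 were expected), and the next assertion reads Status from the same payload. A reduced sketch of that decode, with the struct trimmed to just the fields these checks touch (field names copied from the JSON in the log):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Trimmed view of the profile list payload shown in the log.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Same binary and flags as the test invocation in the log.
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// This run reported 1 node and Status "Stopped" where the
		// test wanted 4 nodes and a happy cluster.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}
```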
ha_test.go:307: expected profile "ha-844000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.461542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status --output json -v=7 --alsologtostderr: exit status 7 (28.956209ms)

-- stdout --
	{"Name":"ha-844000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 03:25:43.356825    7660 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:43.356973    7660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.356976    7660 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:43.356978    7660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.357121    7660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:43.357234    7660 out.go:298] Setting JSON to true
	I0729 03:25:43.357243    7660 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:43.357302    7660 notify.go:220] Checking for updates...
	I0729 03:25:43.357459    7660 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:43.357468    7660 status.go:255] checking status of ha-844000 ...
	I0729 03:25:43.357692    7660 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:43.357696    7660 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:43.357698    7660 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-844000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
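The unmarshal failure is mechanical: with a single node, the status command emits one JSON object (the stdout above), while the test decodes into a slice ([]cmd.Status). A minimal reproduction of the mismatch, plus a tolerant fallback decode (a sketch using an abbreviated status struct, not minikube's own type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name string
	Host string
}

func main() {
	// Exactly what the log's stdout contained: one object, not an array.
	raw := []byte(`{"Name":"ha-844000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []status
	if err := json.Unmarshal(raw, &many); err != nil {
		// Reproduces the shape of the test's failure:
		// "cannot unmarshal object into Go value of type []cmd.Status"
		fmt.Println("slice decode fails:", err)
	}

	// Tolerant decode: fall back to a single object and wrap it.
	var one status
	if err := json.Unmarshal(raw, &one); err == nil {
		many = []status{one}
	}
	fmt.Printf("recovered %d status record(s): %+v\n", len(many), many)
}
```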
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.08ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

TestMultiControlPlane/serial/StopSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 node stop m02 -v=7 --alsologtostderr: exit status 85 (44.577584ms)

-- stdout --

-- /stdout --
** stderr ** 
	I0729 03:25:43.415621    7664 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:43.416209    7664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.416213    7664 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:43.416215    7664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.416405    7664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:43.416655    7664 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:43.416849    7664 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:43.420998    7664 out.go:177] 
	W0729 03:25:43.424002    7664 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 03:25:43.424007    7664 out.go:239] * 
	* 
	W0729 03:25:43.425981    7664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:25:43.428931    7664 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-844000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (29.323166ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:43.460693    7666 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:43.460857    7666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.460860    7666 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:43.460862    7666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.460975    7666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:43.461089    7666 out.go:298] Setting JSON to false
	I0729 03:25:43.461098    7666 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:43.461160    7666 notify.go:220] Checking for updates...
	I0729 03:25:43.461281    7666 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:43.461288    7666 status.go:255] checking status of ha-844000 ...
	I0729 03:25:43.461489    7666 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:43.461492    7666 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:43.461494    7666 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.735ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.10s)
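Exit status 85 above is minikube's GUEST_NODE_RETRIEVE error: the profile never gained a second control-plane node (StartCluster failed earlier in this run), so there is no m02 to stop. A minimal precondition check, sketched in Go under the assumptions that out/minikube-darwin-arm64 runs from the working directory and that minikube names secondary nodes <profile>-m02:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // `minikube node list` prints one "name<TAB>ip" line per node, so a
    // profile that never provisioned m02 lists only the primary node.
    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "ha-844000", "node", "list").Output()
        if err != nil {
            fmt.Println("node list failed:", err)
            return
        }
        if !strings.Contains(string(out), "ha-844000-m02") {
            fmt.Println("no m02 in profile; `node stop m02` exits 85 (GUEST_NODE_RETRIEVE)")
        }
    }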

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-844000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (28.7745ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
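The assertion at ha_test.go:413 reads the Status field out of `profile list --output json`, whose shape is visible in the blob above. A minimal decoder for just the asserted fields (a sketch, not the test suite's own code; same binary and profile assumptions as the sketch above):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Decode only what the assertion inspects: profile name, status, and
    // node count, matching the "valid"/"Config"/"Nodes" keys dumped above.
    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct{ ControlPlane, Worker bool }
            }
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: status=%s, nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }

Against this run's output it would print "ha-844000: status=Stopped, nodes=1", which is exactly the mismatch the test reports ("Degraded" expected, one node instead of three).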

TestMultiControlPlane/serial/RestartSecondaryNode (55.03s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 node start m02 -v=7 --alsologtostderr: exit status 85 (46.554875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 03:25:43.597687    7675 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:43.598121    7675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.598125    7675 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:43.598127    7675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.598291    7675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:43.598506    7675 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:43.598684    7675 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:43.602110    7675 out.go:177] 
	W0729 03:25:43.605827    7675 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 03:25:43.605833    7675 out.go:239] * 
	* 
	W0729 03:25:43.607685    7675 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:25:43.611945    7675 out.go:177] 

** /stderr **
ha_test.go:422: I0729 03:25:43.597687    7675 out.go:291] Setting OutFile to fd 1 ...
I0729 03:25:43.598121    7675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:25:43.598125    7675 out.go:304] Setting ErrFile to fd 2...
I0729 03:25:43.598127    7675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:25:43.598291    7675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:25:43.598506    7675 mustload.go:65] Loading cluster: ha-844000
I0729 03:25:43.598684    7675 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:25:43.602110    7675 out.go:177] 
W0729 03:25:43.605827    7675 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0729 03:25:43.605833    7675 out.go:239] * 
* 
W0729 03:25:43.607685    7675 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 03:25:43.611945    7675 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-844000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (29.135792ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:43.643476    7677 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:43.643642    7677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.643646    7677 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:43.643648    7677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:43.643775    7677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:43.643894    7677 out.go:298] Setting JSON to false
	I0729 03:25:43.643903    7677 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:43.643974    7677 notify.go:220] Checking for updates...
	I0729 03:25:43.644111    7677 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:43.644117    7677 status.go:255] checking status of ha-844000 ...
	I0729 03:25:43.644332    7677 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:43.644336    7677 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:43.644338    7677 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (73.075583ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:44.391403    7679 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:44.391833    7679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:44.391839    7679 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:44.391843    7679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:44.392119    7679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:44.392320    7679 out.go:298] Setting JSON to false
	I0729 03:25:44.392331    7679 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:44.392381    7679 notify.go:220] Checking for updates...
	I0729 03:25:44.392936    7679 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:44.392950    7679 status.go:255] checking status of ha-844000 ...
	I0729 03:25:44.393225    7679 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:44.393231    7679 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:44.393234    7679 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (72.834917ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:45.745801    7681 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:45.745970    7681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:45.745974    7681 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:45.745977    7681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:45.746135    7681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:45.746292    7681 out.go:298] Setting JSON to false
	I0729 03:25:45.746305    7681 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:45.746348    7681 notify.go:220] Checking for updates...
	I0729 03:25:45.746585    7681 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:45.746594    7681 status.go:255] checking status of ha-844000 ...
	I0729 03:25:45.746884    7681 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:45.746889    7681 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:45.746892    7681 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (72.521083ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:47.903659    7683 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:47.903881    7683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:47.903886    7683 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:47.903890    7683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:47.904088    7683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:47.904251    7683 out.go:298] Setting JSON to false
	I0729 03:25:47.904265    7683 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:47.904316    7683 notify.go:220] Checking for updates...
	I0729 03:25:47.904536    7683 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:47.904545    7683 status.go:255] checking status of ha-844000 ...
	I0729 03:25:47.904853    7683 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:47.904858    7683 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:47.904861    7683 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (74.245208ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:52.443294    7685 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:52.443535    7685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:52.443540    7685 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:52.443542    7685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:52.443744    7685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:52.443930    7685 out.go:298] Setting JSON to false
	I0729 03:25:52.443947    7685 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:52.443983    7685 notify.go:220] Checking for updates...
	I0729 03:25:52.444213    7685 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:52.444223    7685 status.go:255] checking status of ha-844000 ...
	I0729 03:25:52.444513    7685 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:52.444518    7685 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:52.444521    7685 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (74.7195ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:25:59.913047    7687 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:25:59.913241    7687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:59.913245    7687 out.go:304] Setting ErrFile to fd 2...
	I0729 03:25:59.913248    7687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:25:59.913444    7687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:25:59.913607    7687 out.go:298] Setting JSON to false
	I0729 03:25:59.913619    7687 mustload.go:65] Loading cluster: ha-844000
	I0729 03:25:59.913655    7687 notify.go:220] Checking for updates...
	I0729 03:25:59.913892    7687 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:25:59.913900    7687 status.go:255] checking status of ha-844000 ...
	I0729 03:25:59.914173    7687 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:25:59.914178    7687 status.go:343] host is not running, skipping remaining checks
	I0729 03:25:59.914181    7687 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (75.562541ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:26:03.814710    7689 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:03.814937    7689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:03.814942    7689 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:03.814945    7689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:03.815117    7689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:03.815283    7689 out.go:298] Setting JSON to false
	I0729 03:26:03.815296    7689 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:03.815344    7689 notify.go:220] Checking for updates...
	I0729 03:26:03.815549    7689 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:03.815557    7689 status.go:255] checking status of ha-844000 ...
	I0729 03:26:03.815836    7689 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:26:03.815841    7689 status.go:343] host is not running, skipping remaining checks
	I0729 03:26:03.815844    7689 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (66.876416ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:26:15.217960    7694 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:15.218175    7694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:15.218182    7694 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:15.218186    7694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:15.218407    7694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:15.218654    7694 out.go:298] Setting JSON to false
	I0729 03:26:15.218675    7694 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:15.218719    7694 notify.go:220] Checking for updates...
	I0729 03:26:15.218995    7694 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:15.219006    7694 status.go:255] checking status of ha-844000 ...
	I0729 03:26:15.219338    7694 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:26:15.219344    7694 status.go:343] host is not running, skipping remaining checks
	I0729 03:26:15.219348    7694 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (72.807708ms)

-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:26:38.557877    7698 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:38.558106    7698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:38.558111    7698 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:38.558114    7698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:38.558322    7698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:38.558504    7698 out.go:298] Setting JSON to false
	I0729 03:26:38.558517    7698 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:38.558545    7698 notify.go:220] Checking for updates...
	I0729 03:26:38.558783    7698 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:38.558793    7698 status.go:255] checking status of ha-844000 ...
	I0729 03:26:38.559080    7698 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:26:38.559085    7698 status.go:343] host is not running, skipping remaining checks
	I0729 03:26:38.559088    7698 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (33.292166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.03s)
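The nine status probes above are spaced at roughly doubling intervals (03:25:43, :44, :45, :47, :52, :59, then 03:26:03, :15, :38): the harness polls until `status` stops returning a non-zero exit, then gives up. A sketch of that polling pattern in Go, an approximation of the cadence the timestamps suggest rather than the harness's exact schedule:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Poll `minikube status` with doubling delays, giving up after the
    // ninth attempt. Exit status 7 keeps signalling a stopped host here.
    func main() {
        delay := time.Second
        for attempt := 1; attempt <= 9; attempt++ {
            err := exec.Command("out/minikube-darwin-arm64", "-p", "ha-844000",
                "status", "-v=7", "--alsologtostderr").Run()
            if err == nil {
                fmt.Println("status OK on attempt", attempt)
                return
            }
            time.Sleep(delay)
            delay *= 2
        }
        fmt.Println("status still non-zero after 9 attempts")
    }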

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-844000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-844000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.477292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.16s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-844000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-844000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-844000 -v=7 --alsologtostderr: (2.802952292s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-844000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-844000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225270667s)

-- stdout --
	* [ha-844000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-844000" primary control-plane node in "ha-844000" cluster
	* Restarting existing qemu2 VM for "ha-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
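The stdout above captures the root failure of this run: both restart attempts are refused when dialing /var/run/socket_vmnet, meaning the socket_vmnet daemon is not serving that socket (the stderr below shows the full qemu invocation through socket_vmnet_client). A minimal probe of the same socket, sketched independently of the test suite, with the path taken from SocketVMnetPath in the profile config above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
    // A "connection refused" here reproduces the error shown above.
    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }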
** stderr ** 
	I0729 03:26:41.568447    7727 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:41.568670    7727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:41.568674    7727 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:41.568678    7727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:41.568868    7727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:41.570182    7727 out.go:298] Setting JSON to false
	I0729 03:26:41.590297    7727 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5170,"bootTime":1722243631,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:26:41.590360    7727 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:26:41.595292    7727 out.go:177] * [ha-844000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:26:41.602179    7727 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:26:41.602224    7727 notify.go:220] Checking for updates...
	I0729 03:26:41.610135    7727 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:26:41.613136    7727 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:26:41.614437    7727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:26:41.617107    7727 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:26:41.620162    7727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:26:41.623421    7727 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:41.623488    7727 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:26:41.628120    7727 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:26:41.635095    7727 start.go:297] selected driver: qemu2
	I0729 03:26:41.635102    7727 start.go:901] validating driver "qemu2" against &{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:26:41.635157    7727 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:26:41.637740    7727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:26:41.637782    7727 cni.go:84] Creating CNI manager for ""
	I0729 03:26:41.637788    7727 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 03:26:41.637842    7727 start.go:340] cluster config:
	{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:26:41.641752    7727 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:26:41.650107    7727 out.go:177] * Starting "ha-844000" primary control-plane node in "ha-844000" cluster
	I0729 03:26:41.654125    7727 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:26:41.654145    7727 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:26:41.654157    7727 cache.go:56] Caching tarball of preloaded images
	I0729 03:26:41.654234    7727 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:26:41.654241    7727 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:26:41.654316    7727 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/ha-844000/config.json ...
	I0729 03:26:41.654777    7727 start.go:360] acquireMachinesLock for ha-844000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:26:41.654817    7727 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "ha-844000"
	I0729 03:26:41.654829    7727 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:26:41.654835    7727 fix.go:54] fixHost starting: 
	I0729 03:26:41.654996    7727 fix.go:112] recreateIfNeeded on ha-844000: state=Stopped err=<nil>
	W0729 03:26:41.655006    7727 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:26:41.663126    7727 out.go:177] * Restarting existing qemu2 VM for "ha-844000" ...
	I0729 03:26:41.667095    7727 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:26:41.667140    7727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8d:60:48:7b:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:26:41.669713    7727 main.go:141] libmachine: STDOUT: 
	I0729 03:26:41.669739    7727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:26:41.669772    7727 fix.go:56] duration metric: took 14.935334ms for fixHost
	I0729 03:26:41.669778    7727 start.go:83] releasing machines lock for "ha-844000", held for 14.956125ms
	W0729 03:26:41.669785    7727 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:26:41.669837    7727 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:26:41.669844    7727 start.go:729] Will try again in 5 seconds ...
	I0729 03:26:46.671866    7727 start.go:360] acquireMachinesLock for ha-844000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:26:46.672252    7727 start.go:364] duration metric: took 304.166µs to acquireMachinesLock for "ha-844000"
	I0729 03:26:46.672383    7727 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:26:46.672401    7727 fix.go:54] fixHost starting: 
	I0729 03:26:46.673100    7727 fix.go:112] recreateIfNeeded on ha-844000: state=Stopped err=<nil>
	W0729 03:26:46.673127    7727 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:26:46.680467    7727 out.go:177] * Restarting existing qemu2 VM for "ha-844000" ...
	I0729 03:26:46.684512    7727 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:26:46.684740    7727 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8d:60:48:7b:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:26:46.693797    7727 main.go:141] libmachine: STDOUT: 
	I0729 03:26:46.693905    7727 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:26:46.693983    7727 fix.go:56] duration metric: took 21.583083ms for fixHost
	I0729 03:26:46.694011    7727 start.go:83] releasing machines lock for "ha-844000", held for 21.73375ms
	W0729 03:26:46.694201    7727 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-844000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-844000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:26:46.701438    7727 out.go:177] 
	W0729 03:26:46.705534    7727 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:26:46.705577    7727 out.go:239] * 
	* 
	W0729 03:26:46.708457    7727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:26:46.715453    7727 out.go:177] 
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-844000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-844000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (33.118875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.16s)
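
Every failed start above dies at the same driver step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM never gets its network device and minikube exits with GUEST_PROVISION after one retry. A minimal Go sketch of an equivalent connectivity probe (the socket path is taken from the logs above; the probe program itself is hypothetical and not part of the test suite):

	// probe.go: hypothetical standalone check, not part of minikube.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in every failure above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the driver errors above and
			// usually means the socket_vmnet daemon is not running on the host.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails with a refused connection, every restart attempt in this run would fail the same way, which matches the repeated GUEST_PROVISION exits below.
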
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.132959ms)
-- stdout --
	* The control-plane node ha-844000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-844000"
-- /stdout --
** stderr ** 
	I0729 03:26:46.860929    7739 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:46.861276    7739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:46.861279    7739 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:46.861282    7739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:46.861410    7739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:46.861640    7739 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:46.861825    7739 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:46.865261    7739 out.go:177] * The control-plane node ha-844000 host is not running: state=Stopped
	I0729 03:26:46.868087    7739 out.go:177]   To start a cluster, run: "minikube start -p ha-844000"
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-844000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (29.430167ms)
-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0729 03:26:46.899734    7741 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:46.899877    7741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:46.899880    7741 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:46.899883    7741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:46.900008    7741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:46.900136    7741 out.go:298] Setting JSON to false
	I0729 03:26:46.900146    7741 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:46.900210    7741 notify.go:220] Checking for updates...
	I0729 03:26:46.900334    7741 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:46.900340    7741 status.go:255] checking status of ha-844000 ...
	I0729 03:26:46.900553    7741 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:26:46.900556    7741 status.go:343] host is not running, skipping remaining checks
	I0729 03:26:46.900559    7741 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.23475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-844000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.248542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
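
The Degraded assertion above is driven entirely by the output of "minikube profile list --output json": with the host stopped, the profile's Status field is "Stopped" rather than the expected "Degraded", and Config.Nodes still holds a single entry. A minimal Go sketch that reads just the fields that assertion compares (field names are taken from the JSON dump above; the reader program itself is hypothetical, not harness code):

	// profilestatus.go: hypothetical reader for the profile-list JSON above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors only the fields the failed assertion compares.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			// With the host down this prints status=Stopped nodes=1,
			// which is exactly why the Degraded expectation fails.
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}
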
TestMultiControlPlane/serial/StopCluster (3.49s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-844000 stop -v=7 --alsologtostderr: (3.393278375s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr: exit status 7 (66.690916ms)
-- stdout --
	ha-844000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0729 03:26:50.464248    7768 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:50.464425    7768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:50.464429    7768 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:50.464432    7768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:50.464599    7768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:50.464750    7768 out.go:298] Setting JSON to false
	I0729 03:26:50.464762    7768 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:50.464805    7768 notify.go:220] Checking for updates...
	I0729 03:26:50.465006    7768 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:50.465015    7768 status.go:255] checking status of ha-844000 ...
	I0729 03:26:50.465288    7768 status.go:330] ha-844000 host status = "Stopped" (err=<nil>)
	I0729 03:26:50.465293    7768 status.go:343] host is not running, skipping remaining checks
	I0729 03:26:50.465296    7768 status.go:257] ha-844000 status: &{Name:ha-844000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-844000 status -v=7 --alsologtostderr": ha-844000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (32.589709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.49s)
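
Note the two failure shapes in this block: "minikube stop" succeeds (exit 0) while the follow-up "minikube status" returns exit status 7, which helpers_test.go explicitly labels "may be ok" because a stopped host is a legitimate state that still prints usable output. A small Go sketch of separating that informational exit code from a hard failure (binary path and profile name are taken from the run above; the checker itself is an assumption, not harness code):

	// statuscheck.go: hypothetical illustration of the exit-code handling.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", "ha-844000")
		out, err := cmd.Output() // stdout ("Stopped") is still captured on a non-zero exit
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host: %s\n", out)
		case errors.As(err, &ee):
			// Exit status 7 with readable output is the informational case
			// the post-mortem above treats as "may be ok".
			fmt.Printf("exit %d, host: %s\n", ee.ExitCode(), out)
		default:
			fmt.Println("failed to run minikube:", err)
		}
	}
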
TestMultiControlPlane/serial/RestartCluster (5.24s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-844000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-844000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.176947583s)
-- stdout --
	* [ha-844000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-844000" primary control-plane node in "ha-844000" cluster
	* Restarting existing qemu2 VM for "ha-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 03:26:50.526000    7772 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:50.526158    7772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:50.526162    7772 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:50.526164    7772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:50.526295    7772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:50.527299    7772 out.go:298] Setting JSON to false
	I0729 03:26:50.543545    7772 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5179,"bootTime":1722243631,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:26:50.543614    7772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:26:50.548299    7772 out.go:177] * [ha-844000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:26:50.556240    7772 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:26:50.556303    7772 notify.go:220] Checking for updates...
	I0729 03:26:50.564057    7772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:26:50.567184    7772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:26:50.571225    7772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:26:50.572629    7772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:26:50.575261    7772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:26:50.578538    7772 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:50.578816    7772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:26:50.580516    7772 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:26:50.587226    7772 start.go:297] selected driver: qemu2
	I0729 03:26:50.587235    7772 start.go:901] validating driver "qemu2" against &{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:26:50.587326    7772 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:26:50.589465    7772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:26:50.589487    7772 cni.go:84] Creating CNI manager for ""
	I0729 03:26:50.589491    7772 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 03:26:50.589530    7772 start.go:340] cluster config:
	{Name:ha-844000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-844000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:26:50.592934    7772 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:26:50.601205    7772 out.go:177] * Starting "ha-844000" primary control-plane node in "ha-844000" cluster
	I0729 03:26:50.605251    7772 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:26:50.605266    7772 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:26:50.605275    7772 cache.go:56] Caching tarball of preloaded images
	I0729 03:26:50.605331    7772 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:26:50.605336    7772 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:26:50.605386    7772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/ha-844000/config.json ...
	I0729 03:26:50.605809    7772 start.go:360] acquireMachinesLock for ha-844000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:26:50.605839    7772 start.go:364] duration metric: took 23.916µs to acquireMachinesLock for "ha-844000"
	I0729 03:26:50.605849    7772 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:26:50.605855    7772 fix.go:54] fixHost starting: 
	I0729 03:26:50.605969    7772 fix.go:112] recreateIfNeeded on ha-844000: state=Stopped err=<nil>
	W0729 03:26:50.605979    7772 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:26:50.614251    7772 out.go:177] * Restarting existing qemu2 VM for "ha-844000" ...
	I0729 03:26:50.618176    7772 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:26:50.618213    7772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8d:60:48:7b:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:26:50.620301    7772 main.go:141] libmachine: STDOUT: 
	I0729 03:26:50.620323    7772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:26:50.620353    7772 fix.go:56] duration metric: took 14.498375ms for fixHost
	I0729 03:26:50.620357    7772 start.go:83] releasing machines lock for "ha-844000", held for 14.514542ms
	W0729 03:26:50.620364    7772 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:26:50.620399    7772 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:26:50.620404    7772 start.go:729] Will try again in 5 seconds ...
	I0729 03:26:55.622449    7772 start.go:360] acquireMachinesLock for ha-844000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:26:55.622931    7772 start.go:364] duration metric: took 330.167µs to acquireMachinesLock for "ha-844000"
	I0729 03:26:55.623089    7772 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:26:55.623110    7772 fix.go:54] fixHost starting: 
	I0729 03:26:55.623814    7772 fix.go:112] recreateIfNeeded on ha-844000: state=Stopped err=<nil>
	W0729 03:26:55.623841    7772 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:26:55.628216    7772 out.go:177] * Restarting existing qemu2 VM for "ha-844000" ...
	I0729 03:26:55.632172    7772 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:26:55.632384    7772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8d:60:48:7b:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/ha-844000/disk.qcow2
	I0729 03:26:55.641460    7772 main.go:141] libmachine: STDOUT: 
	I0729 03:26:55.641520    7772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:26:55.641579    7772 fix.go:56] duration metric: took 18.47125ms for fixHost
	I0729 03:26:55.641597    7772 start.go:83] releasing machines lock for "ha-844000", held for 18.612458ms
	W0729 03:26:55.641780    7772 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-844000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-844000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:26:55.650199    7772 out.go:177] 
	W0729 03:26:55.654186    7772 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:26:55.654205    7772 out.go:239] * 
	* 
	W0729 03:26:55.657222    7772 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:26:55.663182    7772 out.go:177] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-844000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (64.749709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.24s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-844000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.939833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-844000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-844000 --control-plane -v=7 --alsologtostderr: exit status 83 (38.510958ms)
-- stdout --
	* The control-plane node ha-844000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-844000"
-- /stdout --
** stderr ** 
	I0729 03:26:55.848272    7787 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:26:55.848448    7787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:55.848451    7787 out.go:304] Setting ErrFile to fd 2...
	I0729 03:26:55.848454    7787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:26:55.848608    7787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:26:55.848872    7787 mustload.go:65] Loading cluster: ha-844000
	I0729 03:26:55.849054    7787 config.go:182] Loaded profile config "ha-844000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:26:55.850862    7787 out.go:177] * The control-plane node ha-844000 host is not running: state=Stopped
	I0729 03:26:55.854036    7787 out.go:177]   To start a cluster, run: "minikube start -p ha-844000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-844000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (28.904791ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-844000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-844000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-844000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-844000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-844000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-844000 -n ha-844000: exit status 7 (29.456459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-844000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)
TestImageBuild/serial/Setup (9.91s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-972000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-972000 --driver=qemu2 : exit status 80 (9.8426495s)
-- stdout --
	* [image-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-972000" primary control-plane node in "image-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-972000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-972000 -n image-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-972000 -n image-972000: exit status 7 (67.842667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)
TestJSONOutput/start/Command (10.01s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-900000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-900000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.010909833s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b747aaa6-401d-4db9-b886-91ce73c1e004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-900000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5916fb56-0d70-44dc-ae0a-ddeab2d8fab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"099fbe98-a522-4a51-8e55-505b951e7d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig"}}
	{"specversion":"1.0","id":"f8a36657-d1d8-4b8a-8df6-d0c47e596be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9d4570fb-2105-4bfd-95d5-35eada03994e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae4afe4a-37b3-4c15-94f7-6cba9ddac46b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube"}}
	{"specversion":"1.0","id":"d2668400-d1af-4720-a9bf-5b1306ace8c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d4a9793e-53d1-4904-be17-8d44d31e4365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"94ba86d5-df2b-40ea-89da-f0468a1a11cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"8a5fa115-34ca-4bfa-9199-94d62a670bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-900000\" primary control-plane node in \"json-output-900000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"87d60d8a-2950-4142-aee0-2bcddfc3745f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e70a9bbc-fb2f-4c33-a910-faf8f06d898d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-900000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a812a3d-0a3f-4675-bf1e-3145f07e75e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"7ec40519-071c-4b71-a9e9-21fc70a10a40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d4aaff94-b977-40e0-9f6c-b05b66942f26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-900000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1c54a3af-20dd-49ef-8387-3d6aef3c9f96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a374b479-7b92-49bc-978f-58feddb313fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-900000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.01s)
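
This test fails twice over: the start command itself exits 80, and the harness then cannot parse the captured stdout as CloudEvents because the driver's raw OUTPUT:/ERROR: lines are interleaved with the JSON events. The snippet below is a hedged sketch (plain encoding/json, not the test's own parser) that reproduces both unmarshal errors: the 'O' reported here and the '*' reported in the unpause test further down.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// First non-JSON line captured in each failing test's stdout.
		lines := []string{
			"OUTPUT: ", // qemu driver noise in the start test
			"* The control-plane node json-output-900000 host is not running: state=Stopped", // unpause test
		}
		var v map[string]interface{}
		for _, line := range lines {
			// Prints: invalid character 'O' looking for beginning of value
			//         invalid character '*' looking for beginning of value
			fmt.Println(json.Unmarshal([]byte(line), &v))
		}
	}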

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-900000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-900000 --output=json --user=testUser: exit status 83 (79.205375ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"682e85e6-523c-4b92-b803-256311827238","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-900000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9f19b63d-cb79-492f-ad40-43d5d6159e2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-900000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-900000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-900000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-900000 --output=json --user=testUser: exit status 83 (43.248458ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-900000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-900000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-900000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.26s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-692000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-692000 --driver=qemu2 : exit status 80 (9.963075084s)

                                                
                                                
-- stdout --
	* [first-692000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-692000" primary control-plane node in "first-692000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-692000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-692000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-692000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 03:27:28.472133 -0700 PDT m=+488.864155626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-693000 -n second-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-693000 -n second-693000: exit status 85 (83.509834ms)

                                                
                                                
-- stdout --
	* Profile "second-693000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-693000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-693000" host is not running, skipping log retrieval (state="* Profile \"second-693000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-693000\"")
helpers_test.go:175: Cleaning up "second-693000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-693000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 03:27:28.666807 -0700 PDT m=+489.058832751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-692000 -n first-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-692000 -n first-692000: exit status 7 (30.091042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-692000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-692000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-692000
--- FAIL: TestMinikubeProfile (10.26s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-814000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-814000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.906223125s)

                                                
                                                
-- stdout --
	* [mount-start-1-814000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-814000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-814000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-814000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-814000 -n mount-start-1-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-814000 -n mount-start-1-814000: exit status 7 (69.784417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-814000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.98s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-242000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-242000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.896773208s)

                                                
                                                
-- stdout --
	* [multinode-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-242000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:27:38.954847    7925 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:27:38.954974    7925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:27:38.954977    7925 out.go:304] Setting ErrFile to fd 2...
	I0729 03:27:38.954980    7925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:27:38.955107    7925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:27:38.956152    7925 out.go:298] Setting JSON to false
	I0729 03:27:38.972386    7925 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5227,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:27:38.972457    7925 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:27:38.978506    7925 out.go:177] * [multinode-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:27:38.986513    7925 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:27:38.986596    7925 notify.go:220] Checking for updates...
	I0729 03:27:38.994452    7925 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:27:38.998478    7925 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:27:39.001483    7925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:27:39.004447    7925 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:27:39.007497    7925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:27:39.010546    7925 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:27:39.014420    7925 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:27:39.021349    7925 start.go:297] selected driver: qemu2
	I0729 03:27:39.021355    7925 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:27:39.021361    7925 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:27:39.023604    7925 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:27:39.026452    7925 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:27:39.029529    7925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:27:39.029543    7925 cni.go:84] Creating CNI manager for ""
	I0729 03:27:39.029547    7925 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 03:27:39.029550    7925 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 03:27:39.029580    7925 start.go:340] cluster config:
	{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:27:39.033185    7925 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:27:39.041443    7925 out.go:177] * Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	I0729 03:27:39.045314    7925 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:27:39.045332    7925 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:27:39.045342    7925 cache.go:56] Caching tarball of preloaded images
	I0729 03:27:39.045406    7925 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:27:39.045412    7925 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:27:39.045643    7925 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/multinode-242000/config.json ...
	I0729 03:27:39.045654    7925 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/multinode-242000/config.json: {Name:mkf6a5c2d58ea8c244b5067da6aa10698fc4af49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:27:39.045898    7925 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:27:39.045933    7925 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "multinode-242000"
	I0729 03:27:39.045946    7925 start.go:93] Provisioning new machine with config: &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:27:39.045987    7925 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:27:39.056442    7925 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:27:39.076601    7925 start.go:159] libmachine.API.Create for "multinode-242000" (driver="qemu2")
	I0729 03:27:39.076626    7925 client.go:168] LocalClient.Create starting
	I0729 03:27:39.076705    7925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:27:39.076749    7925 main.go:141] libmachine: Decoding PEM data...
	I0729 03:27:39.076762    7925 main.go:141] libmachine: Parsing certificate...
	I0729 03:27:39.076804    7925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:27:39.076831    7925 main.go:141] libmachine: Decoding PEM data...
	I0729 03:27:39.076843    7925 main.go:141] libmachine: Parsing certificate...
	I0729 03:27:39.077247    7925 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:27:39.230671    7925 main.go:141] libmachine: Creating SSH key...
	I0729 03:27:39.386019    7925 main.go:141] libmachine: Creating Disk image...
	I0729 03:27:39.386025    7925 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:27:39.386255    7925 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:27:39.395639    7925 main.go:141] libmachine: STDOUT: 
	I0729 03:27:39.395655    7925 main.go:141] libmachine: STDERR: 
	I0729 03:27:39.395710    7925 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2 +20000M
	I0729 03:27:39.403501    7925 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:27:39.403521    7925 main.go:141] libmachine: STDERR: 
	I0729 03:27:39.403534    7925 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:27:39.403542    7925 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:27:39.403551    7925 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:27:39.403588    7925 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:ac:e8:cf:15:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:27:39.405323    7925 main.go:141] libmachine: STDOUT: 
	I0729 03:27:39.405339    7925 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:27:39.405357    7925 client.go:171] duration metric: took 328.727417ms to LocalClient.Create
	I0729 03:27:41.407492    7925 start.go:128] duration metric: took 2.361529625s to createHost
	I0729 03:27:41.407537    7925 start.go:83] releasing machines lock for "multinode-242000", held for 2.36164s
	W0729 03:27:41.407605    7925 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:27:41.418888    7925 out.go:177] * Deleting "multinode-242000" in qemu2 ...
	W0729 03:27:41.452115    7925 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:27:41.452147    7925 start.go:729] Will try again in 5 seconds ...
	I0729 03:27:46.454255    7925 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:27:46.454662    7925 start.go:364] duration metric: took 333.25µs to acquireMachinesLock for "multinode-242000"
	I0729 03:27:46.454792    7925 start.go:93] Provisioning new machine with config: &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:27:46.455074    7925 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:27:46.468393    7925 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:27:46.518719    7925 start.go:159] libmachine.API.Create for "multinode-242000" (driver="qemu2")
	I0729 03:27:46.518758    7925 client.go:168] LocalClient.Create starting
	I0729 03:27:46.518868    7925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:27:46.518932    7925 main.go:141] libmachine: Decoding PEM data...
	I0729 03:27:46.518950    7925 main.go:141] libmachine: Parsing certificate...
	I0729 03:27:46.519018    7925 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:27:46.519062    7925 main.go:141] libmachine: Decoding PEM data...
	I0729 03:27:46.519074    7925 main.go:141] libmachine: Parsing certificate...
	I0729 03:27:46.519660    7925 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:27:46.681047    7925 main.go:141] libmachine: Creating SSH key...
	I0729 03:27:46.760182    7925 main.go:141] libmachine: Creating Disk image...
	I0729 03:27:46.760187    7925 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:27:46.760415    7925 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:27:46.769743    7925 main.go:141] libmachine: STDOUT: 
	I0729 03:27:46.769758    7925 main.go:141] libmachine: STDERR: 
	I0729 03:27:46.769825    7925 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2 +20000M
	I0729 03:27:46.777573    7925 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:27:46.777586    7925 main.go:141] libmachine: STDERR: 
	I0729 03:27:46.777606    7925 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:27:46.777609    7925 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:27:46.777619    7925 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:27:46.777645    7925 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:52:53:05:3c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:27:46.779254    7925 main.go:141] libmachine: STDOUT: 
	I0729 03:27:46.779267    7925 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:27:46.779279    7925 client.go:171] duration metric: took 260.520333ms to LocalClient.Create
	I0729 03:27:48.781407    7925 start.go:128] duration metric: took 2.326348792s to createHost
	I0729 03:27:48.781461    7925 start.go:83] releasing machines lock for "multinode-242000", held for 2.326817167s
	W0729 03:27:48.781833    7925 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-242000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:27:48.793626    7925 out.go:177] 
	W0729 03:27:48.797690    7925 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:27:48.797771    7925 out.go:239] * 
	* 
	W0729 03:27:48.800014    7925 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:27:48.809559    7925 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-242000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (67.249583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.97s)
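
The verbose run (-v=8 --alsologtostderr) narrows the failure to one step: qemu-img convert and resize both succeed, and the error appears only when libmachine launches qemu-system-aarch64 through the socket_vmnet_client wrapper. The sketch below re-runs just that wrapper in isolation; the client and socket paths are copied from the log, while "true" is a hypothetical stand-in for the full qemu command line.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client <socket path> <command...>, as in the log above.
		out, err := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"true", // stand-in for the qemu-system-aarch64 invocation
		).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("exit:", err) // exit status 1, matching the driver's STDERR
		}
	}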

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (97.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (58.461542ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-242000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- rollout status deployment/busybox: exit status 1 (55.514708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.921041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.748041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.147792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.648292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.836958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.427041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.801792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.690292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.006125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.511208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.794292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.186458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.278542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.924792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.740541ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (30.240834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (97.74s)

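Every retry above fails identically with "error: no server found for cluster \"multinode-242000\"": the minikube kubectl wrapper has no API server endpoint to resolve because the profile's VM never came up, so repeating the query cannot succeed. Below is a minimal Go sketch of the retry pattern visible at multinode_test.go:505-524, reconstructed from this output rather than copied from the test source; the binary path and profile name are the ones in this report, and the retry count and delay are illustrative.

    // Retry the pod-IP query a few times before giving up, treating any
    // non-zero exit as possibly temporary, as the log above does.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func podIPs(profile string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "kubectl",
            "-p", profile, "--", "get", "pods",
            "-o", "jsonpath={.items[*].status.podIP}").Output()
        return string(out), err
    }

    func main() {
        for attempt := 1; attempt <= 3; attempt++ {
            if ips, err := podIPs("multinode-242000"); err == nil {
                fmt.Println("pod IPs:", ips)
                return
            }
            time.Sleep(time.Second) // "may be temporary": back off and retry
        }
        fmt.Println("failed to resolve pod IPs")
    }
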
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-242000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.56625ms)

** stderr ** 
	error: no server found for cluster "multinode-242000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (29.403875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-242000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-242000 -v 3 --alsologtostderr: exit status 83 (39.908541ms)

-- stdout --
	* The control-plane node multinode-242000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-242000"

-- /stdout --
** stderr ** 
	I0729 03:29:26.750175    8008 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:26.750344    8008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:26.750347    8008 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:26.750350    8008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:26.750477    8008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:26.750725    8008 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:26.750892    8008 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:26.755691    8008 out.go:177] * The control-plane node multinode-242000 host is not running: state=Stopped
	I0729 03:29:26.758563    8008 out.go:177]   To start a cluster, run: "minikube start -p multinode-242000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-242000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (29.311958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-242000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-242000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.528791ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-242000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-242000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-242000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (29.534625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

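This failure has two layers: kubectl exits 1 because no "multinode-242000" context was ever written to the kubeconfig (that only happens on a successful start), so the test receives empty output, and decoding empty input is what produces "unexpected end of JSON input". A minimal Go sketch of that second-order error, using only the standard library:

    // json.Unmarshal on empty input returns exactly the decode error seen
    // above, so the message points at empty kubectl output, not bad labels.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }
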
TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-242000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-242000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-242000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-242000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (28.977833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

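The assertion itself is straightforward: the profile JSON above carries exactly one entry under Config.Nodes (the primary control plane, stored with an empty Name), so the count is 1 where the test wants 3. A minimal Go sketch of the node-count check, decoding only the fields it needs; the struct shape mirrors the JSON printed above and is illustrative, not minikube's own config type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        // Abbreviated from the `profile list --output json` payload above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-242000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        fmt.Println(len(pl.Valid[0].Config.Nodes), "node(s); the test expected 3")
    }
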
TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status --output json --alsologtostderr: exit status 7 (29.146583ms)

-- stdout --
	{"Name":"multinode-242000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 03:29:26.959524    8020 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:26.959671    8020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:26.959674    8020 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:26.959676    8020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:26.959810    8020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:26.959919    8020 out.go:298] Setting JSON to true
	I0729 03:29:26.959929    8020 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:26.959999    8020 notify.go:220] Checking for updates...
	I0729 03:29:26.960112    8020 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:26.960119    8020 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:26.960339    8020 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:26.960343    8020 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:26.960346    8020 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-242000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (28.689167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

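The decode error here is a shape mismatch rather than a status problem: with a single node, status --output json prints one object (see the stdout above), while the test unmarshals into a slice, and Go's encoding/json refuses to put an object into a []T. A sketch of a tolerant decoder that accepts both shapes; the Status struct mirrors the printed JSON and is not minikube's cmd.Status:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    // decodeStatuses tries the multi-node shape first, then falls back to
    // the single-object shape that a one-node cluster prints.
    func decodeStatuses(raw []byte) ([]Status, error) {
        var many []Status
        if err := json.Unmarshal(raw, &many); err == nil {
            return many, nil
        }
        var one Status
        if err := json.Unmarshal(raw, &one); err != nil {
            return nil, err
        }
        return []Status{one}, nil
    }

    func main() {
        raw := []byte(`{"Name":"multinode-242000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        statuses, err := decodeStatuses(raw)
        fmt.Println(statuses, err)
    }
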
TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 node stop m03: exit status 85 (46.341791ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-242000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status: exit status 7 (29.668917ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr: exit status 7 (29.671375ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:27.094640    8028 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:27.094792    8028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:27.094795    8028 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:27.094797    8028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:27.094924    8028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:27.095049    8028 out.go:298] Setting JSON to false
	I0729 03:29:27.095062    8028 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:27.095126    8028 notify.go:220] Checking for updates...
	I0729 03:29:27.095255    8028 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:27.095263    8028 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:27.095470    8028 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:27.095474    8028 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:27.095476    8028 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (29.184417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

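GUEST_NODE_RETRIEVE is a lookup failure, not a stop failure: "node stop m03" resolves the node name against the profile's node list, and because the initial StartCluster never succeeded, no secondary nodes were ever added; the config shown earlier holds only the primary, stored with an empty Name. A small Go sketch of that lookup, with illustrative names:

    package main

    import "fmt"

    type node struct{ Name string }

    // findNode mirrors a lookup of a node by name in the profile config.
    func findNode(nodes []node, name string) (node, bool) {
        for _, n := range nodes {
            if n.Name == name {
                return n, true
            }
        }
        return node{}, false
    }

    func main() {
        nodes := []node{{Name: ""}} // only the primary control plane exists
        if _, ok := findNode(nodes, "m03"); !ok {
            fmt.Println("retrieving node: Could not find node m03")
        }
    }
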
TestMultiNode/serial/StartAfterStop (52.8s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.248125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 03:29:27.153370    8032 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:27.153744    8032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:27.153750    8032 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:27.153752    8032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:27.153906    8032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:27.154127    8032 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:27.154300    8032 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:27.158236    8032 out.go:177] 
	W0729 03:29:27.162024    8032 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 03:29:27.162030    8032 out.go:239] * 
	* 
	W0729 03:29:27.164010    8032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:29:27.168123    8032 out.go:177] 

** /stderr **
multinode_test.go:284: I0729 03:29:27.153370    8032 out.go:291] Setting OutFile to fd 1 ...
I0729 03:29:27.153744    8032 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:29:27.153750    8032 out.go:304] Setting ErrFile to fd 2...
I0729 03:29:27.153752    8032 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:29:27.153906    8032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
I0729 03:29:27.154127    8032 mustload.go:65] Loading cluster: multinode-242000
I0729 03:29:27.154300    8032 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:29:27.158236    8032 out.go:177] 
W0729 03:29:27.162024    8032 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 03:29:27.162030    8032 out.go:239] * 
* 
W0729 03:29:27.164010    8032 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 03:29:27.168123    8032 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-242000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (29.265708ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:27.199832    8034 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:27.199969    8034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:27.199972    8034 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:27.199974    8034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:27.200119    8034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:27.200231    8034 out.go:298] Setting JSON to false
	I0729 03:29:27.200240    8034 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:27.200305    8034 notify.go:220] Checking for updates...
	I0729 03:29:27.200417    8034 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:27.200431    8034 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:27.200636    8034 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:27.200640    8034 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:27.200642    8034 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (74.027083ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:28.018929    8036 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:28.019141    8036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:28.019145    8036 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:28.019149    8036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:28.019329    8036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:28.019490    8036 out.go:298] Setting JSON to false
	I0729 03:29:28.019503    8036 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:28.019549    8036 notify.go:220] Checking for updates...
	I0729 03:29:28.019789    8036 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:28.019798    8036 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:28.020068    8036 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:28.020073    8036 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:28.020076    8036 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (72.301792ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:29.537004    8038 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:29.537229    8038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:29.537233    8038 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:29.537236    8038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:29.537408    8038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:29.537589    8038 out.go:298] Setting JSON to false
	I0729 03:29:29.537604    8038 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:29.537655    8038 notify.go:220] Checking for updates...
	I0729 03:29:29.537857    8038 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:29.537866    8038 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:29.538187    8038 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:29.538192    8038 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:29.538195    8038 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (76.509375ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:31.061355    8040 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:31.061566    8040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:31.061571    8040 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:31.061575    8040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:31.061768    8040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:31.061921    8040 out.go:298] Setting JSON to false
	I0729 03:29:31.061934    8040 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:31.061981    8040 notify.go:220] Checking for updates...
	I0729 03:29:31.062183    8040 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:31.062196    8040 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:31.062473    8040 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:31.062478    8040 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:31.062481    8040 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (72.802375ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:35.680674    8042 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:35.680873    8042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:35.680881    8042 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:35.680885    8042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:35.681054    8042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:35.681235    8042 out.go:298] Setting JSON to false
	I0729 03:29:35.681249    8042 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:35.681306    8042 notify.go:220] Checking for updates...
	I0729 03:29:35.681552    8042 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:35.681561    8042 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:35.681874    8042 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:35.681879    8042 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:35.681882    8042 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (75.2025ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:39.505106    8044 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:39.505308    8044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:39.505313    8044 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:39.505316    8044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:39.505499    8044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:39.505639    8044 out.go:298] Setting JSON to false
	I0729 03:29:39.505651    8044 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:39.505695    8044 notify.go:220] Checking for updates...
	I0729 03:29:39.505889    8044 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:39.505897    8044 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:39.506162    8044 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:39.506167    8044 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:39.506173    8044 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (78.054459ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:48.737546    8046 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:48.737755    8046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:48.737759    8046 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:48.737765    8046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:48.737938    8046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:48.738142    8046 out.go:298] Setting JSON to false
	I0729 03:29:48.738156    8046 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:48.738198    8046 notify.go:220] Checking for updates...
	I0729 03:29:48.738409    8046 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:48.738418    8046 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:48.738717    8046 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:48.738722    8046 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:48.738725    8046 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (75.105375ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:29:54.960893    8048 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:29:54.961099    8048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:54.961103    8048 out.go:304] Setting ErrFile to fd 2...
	I0729 03:29:54.961106    8048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:29:54.961294    8048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:29:54.961467    8048 out.go:298] Setting JSON to false
	I0729 03:29:54.961479    8048 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:29:54.961511    8048 notify.go:220] Checking for updates...
	I0729 03:29:54.961743    8048 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:29:54.961752    8048 status.go:255] checking status of multinode-242000 ...
	I0729 03:29:54.962041    8048 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:29:54.962046    8048 status.go:343] host is not running, skipping remaining checks
	I0729 03:29:54.962049    8048 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr: exit status 7 (71.772167ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:30:19.886907    8315 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:19.887101    8315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:19.887105    8315 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:19.887109    8315 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:19.887281    8315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:30:19.887440    8315 out.go:298] Setting JSON to false
	I0729 03:30:19.887453    8315 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:30:19.887497    8315 notify.go:220] Checking for updates...
	I0729 03:30:19.887717    8315 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:19.887728    8315 status.go:255] checking status of multinode-242000 ...
	I0729 03:30:19.888000    8315 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:30:19.888005    8315 status.go:343] host is not running, skipping remaining checks
	I0729 03:30:19.888008    8315 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-242000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (33.079042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.80s)

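The 52.8s spent here is almost entirely polling: after "node start" fails, the harness re-runs status on a growing backoff (nine runs between 03:29:27 and 03:30:19 above) waiting for the host to report Running, and each poll returns exit 7 with "host: Stopped". A Go sketch of such a poll-until-deadline loop; the command and profile are the ones in this report, while the interval and timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForHost polls `status --format={{.Host}}` until the profile
    // reports Running or the deadline passes; non-zero exits are expected
    // while the host is down, so the error is deliberately ignored.
    func waitForHost(profile string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("out/minikube-darwin-arm64",
                "status", "--format={{.Host}}", "-p", profile).Output()
            if strings.TrimSpace(string(out)) == "Running" {
                return true
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }

    func main() {
        fmt.Println("running:", waitForHost("multinode-242000", 50*time.Second))
    }
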
TestMultiNode/serial/RestartKeepsNodes (7.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-242000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-242000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-242000: (1.848862333s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.227290083s)

-- stdout --
	* [multinode-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	* Restarting existing qemu2 VM for "multinode-242000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-242000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:30:21.866656    8331 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:21.866845    8331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:21.866849    8331 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:21.866853    8331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:21.867043    8331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:30:21.868373    8331 out.go:298] Setting JSON to false
	I0729 03:30:21.888053    8331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5390,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:30:21.888143    8331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:30:21.893434    8331 out.go:177] * [multinode-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:30:21.897262    8331 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:30:21.897311    8331 notify.go:220] Checking for updates...
	I0729 03:30:21.906324    8331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:30:21.909265    8331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:30:21.913265    8331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:30:21.916297    8331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:30:21.919265    8331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:30:21.922553    8331 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:21.922602    8331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:30:21.926306    8331 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:30:21.933313    8331 start.go:297] selected driver: qemu2
	I0729 03:30:21.933321    8331 start.go:901] validating driver "qemu2" against &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:21.933396    8331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:30:21.935956    8331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:30:21.936000    8331 cni.go:84] Creating CNI manager for ""
	I0729 03:30:21.936007    8331 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 03:30:21.936061    8331 start.go:340] cluster config:
	{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:21.939972    8331 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:21.948256    8331 out.go:177] * Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	I0729 03:30:21.952314    8331 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:30:21.952331    8331 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:30:21.952341    8331 cache.go:56] Caching tarball of preloaded images
	I0729 03:30:21.952401    8331 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:30:21.952407    8331 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:30:21.952476    8331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/multinode-242000/config.json ...
	I0729 03:30:21.952927    8331 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:30:21.952965    8331 start.go:364] duration metric: took 31µs to acquireMachinesLock for "multinode-242000"
	I0729 03:30:21.952975    8331 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:30:21.952981    8331 fix.go:54] fixHost starting: 
	I0729 03:30:21.953102    8331 fix.go:112] recreateIfNeeded on multinode-242000: state=Stopped err=<nil>
	W0729 03:30:21.953111    8331 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:30:21.957282    8331 out.go:177] * Restarting existing qemu2 VM for "multinode-242000" ...
	I0729 03:30:21.965214    8331 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:30:21.965264    8331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:52:53:05:3c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:30:21.967437    8331 main.go:141] libmachine: STDOUT: 
	I0729 03:30:21.967458    8331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:30:21.967486    8331 fix.go:56] duration metric: took 14.504292ms for fixHost
	I0729 03:30:21.967492    8331 start.go:83] releasing machines lock for "multinode-242000", held for 14.522958ms
	W0729 03:30:21.967497    8331 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:30:21.967533    8331 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:30:21.967538    8331 start.go:729] Will try again in 5 seconds ...
	I0729 03:30:26.969665    8331 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:30:26.970011    8331 start.go:364] duration metric: took 273.417µs to acquireMachinesLock for "multinode-242000"
	I0729 03:30:26.970125    8331 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:30:26.970143    8331 fix.go:54] fixHost starting: 
	I0729 03:30:26.970881    8331 fix.go:112] recreateIfNeeded on multinode-242000: state=Stopped err=<nil>
	W0729 03:30:26.970908    8331 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:30:26.980259    8331 out.go:177] * Restarting existing qemu2 VM for "multinode-242000" ...
	I0729 03:30:26.983321    8331 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:30:26.983539    8331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:52:53:05:3c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:30:26.992745    8331 main.go:141] libmachine: STDOUT: 
	I0729 03:30:26.992829    8331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:30:26.992922    8331 fix.go:56] duration metric: took 22.778084ms for fixHost
	I0729 03:30:26.992948    8331 start.go:83] releasing machines lock for "multinode-242000", held for 22.913292ms
	W0729 03:30:26.993155    8331 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-242000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-242000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:30:27.000345    8331 out.go:177] 
	W0729 03:30:27.004273    8331 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:30:27.004303    8331 out.go:239] * 
	* 
	W0729 03:30:27.006894    8331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:30:27.015299    8331 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-242000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-242000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (32.538333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.21s)
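Every failure in this run bottoms out in the same root error: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal Go probe, independent of minikube, can confirm whether the daemon is accepting connections. This is a diagnostic sketch, not part of the test suite; the socket path is taken from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client connects to
		// before handing a file descriptor to qemu.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A missing or dead daemon reproduces the "Connection refused"
			// error seen throughout this report.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}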

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 node delete m03: exit status 83 (41.825875ms)

-- stdout --
	* The control-plane node multinode-242000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-242000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-242000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr: exit status 7 (29.831167ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:30:27.201107    8347 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:27.201263    8347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:27.201266    8347 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:27.201269    8347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:27.201406    8347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:30:27.201516    8347 out.go:298] Setting JSON to false
	I0729 03:30:27.201526    8347 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:30:27.201578    8347 notify.go:220] Checking for updates...
	I0729 03:30:27.201743    8347 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:27.201749    8347 status.go:255] checking status of multinode-242000 ...
	I0729 03:30:27.201953    8347 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:30:27.201956    8347 status.go:343] host is not running, skipping remaining checks
	I0729 03:30:27.201958    8347 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (29.770875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
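The exit codes in this subtest carry distinct meanings: "minikube node delete" returns 83 as an advisory exit (the control-plane host is not running), while "minikube status" returns 7 for a stopped host, which helpers_test.go treats as "may be ok". The sketch below shows how a caller could distinguish them with os/exec; the binary path and profile name are copied from the log, and the interpretation of the codes follows the messages above.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "-p", "multinode-242000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			switch ee.ExitCode() {
			case 7:
				fmt.Println("status error: host stopped (may be ok)")
			case 83:
				fmt.Println("advisory exit: control-plane host is not running")
			default:
				fmt.Println("unexpected exit code:", ee.ExitCode())
			}
		}
	}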

TestMultiNode/serial/StopMultiNode (4.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-242000 stop: (4.023999417s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status: exit status 7 (66.317208ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr: exit status 7 (32.42875ms)

-- stdout --
	multinode-242000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 03:30:31.354212    8373 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:31.354365    8373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:31.354368    8373 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:31.354370    8373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:31.354486    8373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:30:31.354607    8373 out.go:298] Setting JSON to false
	I0729 03:30:31.354616    8373 mustload.go:65] Loading cluster: multinode-242000
	I0729 03:30:31.354674    8373 notify.go:220] Checking for updates...
	I0729 03:30:31.354797    8373 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:31.354804    8373 status.go:255] checking status of multinode-242000 ...
	I0729 03:30:31.355010    8373 status.go:330] multinode-242000 host status = "Stopped" (err=<nil>)
	I0729 03:30:31.355014    8373 status.go:343] host is not running, skipping remaining checks
	I0729 03:30:31.355016    8373 status.go:257] multinode-242000 status: &{Name:multinode-242000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-242000 status --alsologtostderr": multinode-242000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (29.659417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.15s)
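The "incorrect number of stopped hosts/kubelets" assertions fire because the status output above contains a single node block where a multinode cluster should report one per node. Below is a hedged sketch of that style of check, counting one "host: Stopped" line per expected node; the real logic in multinode_test.go may differ, and the function and parameter names here are illustrative.

	package multinode

	import (
		"strings"
		"testing"
	)

	// checkStoppedHosts verifies that every expected node reports a stopped host.
	func checkStoppedHosts(t *testing.T, statusOutput string, expectedNodes int) {
		stopped := strings.Count(statusOutput, "host: Stopped")
		if stopped != expectedNodes {
			t.Errorf("incorrect number of stopped hosts: got %d, want %d", stopped, expectedNodes)
		}
	}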

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183568125s)

-- stdout --
	* [multinode-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	* Restarting existing qemu2 VM for "multinode-242000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-242000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:30:31.413357    8377 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:31.413472    8377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:31.413475    8377 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:31.413477    8377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:31.413590    8377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:30:31.414648    8377 out.go:298] Setting JSON to false
	I0729 03:30:31.430759    8377 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5400,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:30:31.430829    8377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:30:31.435261    8377 out.go:177] * [multinode-242000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:30:31.443233    8377 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:30:31.443284    8377 notify.go:220] Checking for updates...
	I0729 03:30:31.451198    8377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:30:31.454153    8377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:30:31.457145    8377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:30:31.460171    8377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:30:31.461587    8377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:30:31.464375    8377 config.go:182] Loaded profile config "multinode-242000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:31.464627    8377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:30:31.469217    8377 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:30:31.474187    8377 start.go:297] selected driver: qemu2
	I0729 03:30:31.474195    8377 start.go:901] validating driver "qemu2" against &{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:31.474275    8377 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:30:31.476464    8377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:30:31.476502    8377 cni.go:84] Creating CNI manager for ""
	I0729 03:30:31.476506    8377 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 03:30:31.476542    8377 start.go:340] cluster config:
	{Name:multinode-242000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-242000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:31.479967    8377 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:31.488146    8377 out.go:177] * Starting "multinode-242000" primary control-plane node in "multinode-242000" cluster
	I0729 03:30:31.492134    8377 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:30:31.492151    8377 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:30:31.492164    8377 cache.go:56] Caching tarball of preloaded images
	I0729 03:30:31.492224    8377 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:30:31.492230    8377 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:30:31.492295    8377 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/multinode-242000/config.json ...
	I0729 03:30:31.492769    8377 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:30:31.492805    8377 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "multinode-242000"
	I0729 03:30:31.492815    8377 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:30:31.492821    8377 fix.go:54] fixHost starting: 
	I0729 03:30:31.492938    8377 fix.go:112] recreateIfNeeded on multinode-242000: state=Stopped err=<nil>
	W0729 03:30:31.492949    8377 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:30:31.501148    8377 out.go:177] * Restarting existing qemu2 VM for "multinode-242000" ...
	I0729 03:30:31.505167    8377 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:30:31.505203    8377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:52:53:05:3c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:30:31.507335    8377 main.go:141] libmachine: STDOUT: 
	I0729 03:30:31.507360    8377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:30:31.507390    8377 fix.go:56] duration metric: took 14.568041ms for fixHost
	I0729 03:30:31.507395    8377 start.go:83] releasing machines lock for "multinode-242000", held for 14.586458ms
	W0729 03:30:31.507401    8377 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:30:31.507443    8377 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:30:31.507448    8377 start.go:729] Will try again in 5 seconds ...
	I0729 03:30:36.509509    8377 start.go:360] acquireMachinesLock for multinode-242000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:30:36.509874    8377 start.go:364] duration metric: took 267.542µs to acquireMachinesLock for "multinode-242000"
	I0729 03:30:36.510011    8377 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:30:36.510029    8377 fix.go:54] fixHost starting: 
	I0729 03:30:36.510716    8377 fix.go:112] recreateIfNeeded on multinode-242000: state=Stopped err=<nil>
	W0729 03:30:36.510741    8377 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:30:36.520238    8377 out.go:177] * Restarting existing qemu2 VM for "multinode-242000" ...
	I0729 03:30:36.524005    8377 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:30:36.524193    8377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:52:53:05:3c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/multinode-242000/disk.qcow2
	I0729 03:30:36.533168    8377 main.go:141] libmachine: STDOUT: 
	I0729 03:30:36.533261    8377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:30:36.533331    8377 fix.go:56] duration metric: took 23.300375ms for fixHost
	I0729 03:30:36.533349    8377 start.go:83] releasing machines lock for "multinode-242000", held for 23.456459ms
	W0729 03:30:36.533465    8377 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-242000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-242000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:30:36.540163    8377 out.go:177] 
	W0729 03:30:36.544185    8377 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:30:36.544241    8377 out.go:239] * 
	* 
	W0729 03:30:36.546963    8377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:30:36.556173    8377 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-242000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (65.825708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
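The libmachine command line above also documents how the VM's network is wired: socket_vmnet_client connects to /var/run/socket_vmnet and then launches qemu, which inherits that connection as file descriptor 3, matching the "-netdev socket,id=net0,fd=3" flag. Below is the same fd-passing pattern in Go, as an illustrative sketch only; the real wrapper is the binary at /opt/socket_vmnet/bin/socket_vmnet_client, and the remaining qemu flags are elided.

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// This connection is the step that fails throughout the report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] is inherited by the child process as fd 3, which is
		// what "-netdev socket,id=net0,fd=3" refers to.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}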

TestMultiNode/serial/ValidateNameConflict (20.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-242000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-242000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-242000-m01 --driver=qemu2 : exit status 80 (10.070630875s)

-- stdout --
	* [multinode-242000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-242000-m01" primary control-plane node in "multinode-242000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-242000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-242000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-242000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-242000-m02 --driver=qemu2 : exit status 80 (10.117384209s)

-- stdout --
	* [multinode-242000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-242000-m02" primary control-plane node in "multinode-242000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-242000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-242000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-242000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-242000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-242000: exit status 83 (77.594542ms)

-- stdout --
	* The control-plane node multinode-242000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-242000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-242000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-242000 -n multinode-242000: exit status 7 (30.220625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-242000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.41s)
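Both failed profile creations above follow the same retry shape seen in the earlier subtests: one failed StartHost, a cleanup (deleting and recreating the VM, or a 5-second wait when reusing an existing one), exactly one more attempt, then a hard exit with GUEST_PROVISION. Below is a compressed sketch of that control flow; startHost is a hypothetical stand-in for the driver's start path, and the messages and timing mirror the log.

	package main

	import (
		"log"
		"net"
		"time"
	)

	// startHost is a hypothetical stand-in for the qemu2 driver's start path;
	// here it simply reproduces the failing socket_vmnet connection.
	func startHost() error {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := startHost(); err != nil {
			log.Printf("! StartHost failed, but will try again: %v", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
			}
		}
	}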

TestPreload (10.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.925727833s)

-- stdout --
	* [test-preload-307000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-307000" primary control-plane node in "test-preload-307000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-307000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:30:57.183632    8430 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:57.183774    8430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:57.183778    8430 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:57.183780    8430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:57.183912    8430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:30:57.184984    8430 out.go:298] Setting JSON to false
	I0729 03:30:57.201049    8430 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5426,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:30:57.201115    8430 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:30:57.206432    8430 out.go:177] * [test-preload-307000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:30:57.214422    8430 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:30:57.214500    8430 notify.go:220] Checking for updates...
	I0729 03:30:57.221260    8430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:30:57.224256    8430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:30:57.227254    8430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:30:57.230272    8430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:30:57.233268    8430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:30:57.235228    8430 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:57.235279    8430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:30:57.239259    8430 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:30:57.246100    8430 start.go:297] selected driver: qemu2
	I0729 03:30:57.246107    8430 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:30:57.246113    8430 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:30:57.248303    8430 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:30:57.252242    8430 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:30:57.255371    8430 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:30:57.255416    8430 cni.go:84] Creating CNI manager for ""
	I0729 03:30:57.255424    8430 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:30:57.255435    8430 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:30:57.255482    8430 start.go:340] cluster config:
	{Name:test-preload-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:57.259434    8430 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.268275    8430 out.go:177] * Starting "test-preload-307000" primary control-plane node in "test-preload-307000" cluster
	I0729 03:30:57.272308    8430 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 03:30:57.272394    8430 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/test-preload-307000/config.json ...
	I0729 03:30:57.272413    8430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/test-preload-307000/config.json: {Name:mkd39e260540d4631cc3dce01c64ab1a1bb875aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:30:57.272446    8430 cache.go:107] acquiring lock: {Name:mk681ae5ae521c0d2b3c927413e664e4d53c0688 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272444    8430 cache.go:107] acquiring lock: {Name:mk44c8e8bff79c2c693a53299c9699d4b770669c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272462    8430 cache.go:107] acquiring lock: {Name:mkc0cab581ac97043877e3b27297ba34034953e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272473    8430 cache.go:107] acquiring lock: {Name:mkd522d13f7a956985832f17526332b5e9087c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272688    8430 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 03:30:57.272698    8430 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 03:30:57.272682    8430 cache.go:107] acquiring lock: {Name:mkb7afa3e1bbd46e15dc5aea80678423d85b9b5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272710    8430 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 03:30:57.272702    8430 cache.go:107] acquiring lock: {Name:mk2873f37fed3e9eee913e310587f85f247fd23a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272734    8430 cache.go:107] acquiring lock: {Name:mk36bda274fbd0ff45b09a222b6e6a5a43816d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272728    8430 cache.go:107] acquiring lock: {Name:mkcecd5fb0259366096f0a38e66cef4006c638f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:30:57.272808    8430 start.go:360] acquireMachinesLock for test-preload-307000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:30:57.272886    8430 start.go:364] duration metric: took 62.208µs to acquireMachinesLock for "test-preload-307000"
	I0729 03:30:57.272904    8430 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 03:30:57.272970    8430 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:30:57.272923    8430 start.go:93] Provisioning new machine with config: &{Name:test-preload-307000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:30:57.272999    8430 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 03:30:57.273003    8430 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:30:57.273071    8430 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:30:57.273080    8430 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:30:57.281193    8430 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:30:57.284488    8430 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 03:30:57.284629    8430 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 03:30:57.285334    8430 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:30:57.285351    8430 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:30:57.286959    8430 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 03:30:57.287005    8430 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 03:30:57.287046    8430 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:30:57.287070    8430 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 03:30:57.299964    8430 start.go:159] libmachine.API.Create for "test-preload-307000" (driver="qemu2")
	I0729 03:30:57.299982    8430 client.go:168] LocalClient.Create starting
	I0729 03:30:57.300116    8430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:30:57.300146    8430 main.go:141] libmachine: Decoding PEM data...
	I0729 03:30:57.300155    8430 main.go:141] libmachine: Parsing certificate...
	I0729 03:30:57.300189    8430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:30:57.300211    8430 main.go:141] libmachine: Decoding PEM data...
	I0729 03:30:57.300219    8430 main.go:141] libmachine: Parsing certificate...
	I0729 03:30:57.300626    8430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:30:57.457887    8430 main.go:141] libmachine: Creating SSH key...
	I0729 03:30:57.679265    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 03:30:57.681013    8430 main.go:141] libmachine: Creating Disk image...
	I0729 03:30:57.681024    8430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:30:57.681238    8430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2
	I0729 03:30:57.688978    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 03:30:57.690816    8430 main.go:141] libmachine: STDOUT: 
	I0729 03:30:57.690824    8430 main.go:141] libmachine: STDERR: 
	I0729 03:30:57.690868    8430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2 +20000M
	I0729 03:30:57.699255    8430 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:30:57.699267    8430 main.go:141] libmachine: STDERR: 
	I0729 03:30:57.699277    8430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2
	I0729 03:30:57.699280    8430 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:30:57.699293    8430 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:30:57.699322    8430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:21:af:61:0d:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2
	I0729 03:30:57.701006    8430 main.go:141] libmachine: STDOUT: 
	I0729 03:30:57.701023    8430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:30:57.701039    8430 client.go:171] duration metric: took 401.061666ms to LocalClient.Create
	I0729 03:30:57.713442    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 03:30:57.732763    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 03:30:57.757181    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0729 03:30:57.797582    8430 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 03:30:57.797609    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 03:30:57.800566    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 03:30:57.846705    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 03:30:57.846722    8430 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 574.275542ms
	I0729 03:30:57.846735    8430 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 03:30:58.027711    8430 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 03:30:58.027815    8430 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 03:30:58.305394    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 03:30:58.305462    8430 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.033030875s
	I0729 03:30:58.305488    8430 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 03:30:59.537817    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 03:30:59.537887    8430 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.265314375s
	I0729 03:30:59.537914    8430 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 03:30:59.568055    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 03:30:59.568114    8430 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.295714209s
	I0729 03:30:59.568142    8430 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 03:30:59.701183    8430 start.go:128] duration metric: took 2.428201708s to createHost
	I0729 03:30:59.701222    8430 start.go:83] releasing machines lock for "test-preload-307000", held for 2.428369083s
	W0729 03:30:59.701284    8430 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:30:59.720527    8430 out.go:177] * Deleting "test-preload-307000" in qemu2 ...
	W0729 03:30:59.747407    8430 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:30:59.747440    8430 start.go:729] Will try again in 5 seconds ...
	I0729 03:31:02.013406    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 03:31:02.013461    8430 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.741074041s
	I0729 03:31:02.013483    8430 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 03:31:02.083339    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 03:31:02.083397    8430 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.810749375s
	I0729 03:31:02.083429    8430 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 03:31:03.549218    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 03:31:03.549266    8430 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.276709833s
	I0729 03:31:03.549291    8430 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 03:31:04.747700    8430 start.go:360] acquireMachinesLock for test-preload-307000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:31:04.748169    8430 start.go:364] duration metric: took 392.5µs to acquireMachinesLock for "test-preload-307000"
	I0729 03:31:04.748314    8430 start.go:93] Provisioning new machine with config: &{Name:test-preload-307000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-307000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:31:04.748543    8430 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:31:04.760193    8430 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:31:04.811745    8430 start.go:159] libmachine.API.Create for "test-preload-307000" (driver="qemu2")
	I0729 03:31:04.811802    8430 client.go:168] LocalClient.Create starting
	I0729 03:31:04.811945    8430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:31:04.812015    8430 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:04.812037    8430 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:04.812120    8430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:31:04.812164    8430 main.go:141] libmachine: Decoding PEM data...
	I0729 03:31:04.812183    8430 main.go:141] libmachine: Parsing certificate...
	I0729 03:31:04.812697    8430 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:31:04.985537    8430 main.go:141] libmachine: Creating SSH key...
	I0729 03:31:05.018539    8430 main.go:141] libmachine: Creating Disk image...
	I0729 03:31:05.018544    8430 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:31:05.018742    8430 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2
	I0729 03:31:05.027905    8430 main.go:141] libmachine: STDOUT: 
	I0729 03:31:05.027925    8430 main.go:141] libmachine: STDERR: 
	I0729 03:31:05.027981    8430 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2 +20000M
	I0729 03:31:05.036009    8430 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:31:05.036026    8430 main.go:141] libmachine: STDERR: 
	I0729 03:31:05.036039    8430 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2
	I0729 03:31:05.036051    8430 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:31:05.036060    8430 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:31:05.036093    8430 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:eb:f5:e8:48:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2
	I0729 03:31:05.037769    8430 main.go:141] libmachine: STDOUT: 
	I0729 03:31:05.037783    8430 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:31:05.037802    8430 client.go:171] duration metric: took 225.999625ms to LocalClient.Create
	I0729 03:31:05.875260    8430 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0729 03:31:05.875336    8430 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.602843042s
	I0729 03:31:05.875377    8430 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0729 03:31:05.875455    8430 cache.go:87] Successfully saved all images to host disk.
	I0729 03:31:07.040058    8430 start.go:128] duration metric: took 2.291466667s to createHost
	I0729 03:31:07.040107    8430 start.go:83] releasing machines lock for "test-preload-307000", held for 2.29195525s
	W0729 03:31:07.040405    8430 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-307000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-307000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:31:07.050798    8430 out.go:177] 
	W0729 03:31:07.054962    8430 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:31:07.054984    8430 out.go:239] * 
	* 
	W0729 03:31:07.057754    8430 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:31:07.065637    8430 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-307000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 03:31:07.084665 -0700 PDT m=+707.480929251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-307000 -n test-preload-307000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-307000 -n test-preload-307000: exit status 7 (65.161292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-307000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-307000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-307000
--- FAIL: TestPreload (10.07s)
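
Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so it never gets as far as handing the network fd to QEMU. A minimal pre-flight check for the build host, using only the paths that appear in the log above (how the daemon is supervised, e.g. via launchd, is an assumption):

    # Is there a unix socket where the driver expects one?
    [ -S /var/run/socket_vmnet ] && echo "socket present" \
        || echo "missing unix socket at /var/run/socket_vmnet" >&2
    # Is any socket_vmnet process alive at all?
    pgrep -fl socket_vmnet || echo "no socket_vmnet process running" >&2

If the socket is absent or nothing accepts connections on it, every test below that creates a VM on the socket_vmnet network fails identically.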

TestScheduledStopUnix (9.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-959000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-959000 --memory=2048 --driver=qemu2 : exit status 80 (9.771875708s)

-- stdout --
	* [scheduled-stop-959000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-959000" primary control-plane node in "scheduled-stop-959000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-959000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-959000" primary control-plane node in "scheduled-stop-959000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-959000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-959000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 03:31:17.000366 -0700 PDT m=+717.396822210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-959000 -n scheduled-stop-959000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-959000 -n scheduled-stop-959000: exit status 7 (69.004958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-959000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-959000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-959000
--- FAIL: TestScheduledStopUnix (9.92s)
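
The post-mortem that follows each failure is mechanical: query the host state with a Go template, tolerate the nonzero exit that status uses for a stopped host, then delete the profile. A sketch of the same sequence with the binary and profile name from this test (nothing beyond "nonzero may still mean a clean Stopped state" is assumed about the exit codes):

    PROFILE=scheduled-stop-959000
    STATE=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE") || true
    echo "host state: ${STATE:-unknown}"    # prints "Stopped" here; status exited 7
    if [ "$STATE" != "Running" ]; then
        # skip log retrieval and clean up, as helpers_test.go does
        out/minikube-darwin-arm64 delete -p "$PROFILE"
    fi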

TestSkaffold (12.32s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2649167769 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2649167769 version: (1.070543208s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-353000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-353000 --memory=2600 --driver=qemu2 : exit status 80 (10.003808s)

-- stdout --
	* [skaffold-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-353000" primary control-plane node in "skaffold-353000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-353000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-353000" primary control-plane node in "skaffold-353000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-353000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 03:31:29.321031 -0700 PDT m=+729.717726043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-353000 -n skaffold-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-353000 -n skaffold-353000: exit status 7 (62.111625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-353000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-353000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-353000
--- FAIL: TestSkaffold (12.32s)
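
For reference, the VM launch that each of these tests attempts is the one recorded in the TestPreload log above; re-wrapped with comments it reads as follows (only the MAC address, memory size, and profile paths vary between tests):

    # socket_vmnet_client first connects to the daemon socket, then starts QEMU
    # with that connection passed down as fd 3 (see -netdev socket,fd=3 below).
    # The "Connection refused" on STDERR means the connect step failed, so QEMU
    # itself never ran.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
      -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -boot d \
      -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/boot2docker.iso \
      -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/monitor,server,nowait \
      -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/qemu.pid \
      -device virtio-net-pci,netdev=net0,mac=5a:21:af:61:0d:76 \
      -netdev socket,id=net0,fd=3 \
      -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/test-preload-307000/disk.qcow2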

TestRunningBinaryUpgrade (588.93s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3067320143 start -p running-upgrade-376000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3067320143 start -p running-upgrade-376000 --memory=2200 --vm-driver=qemu2 : (50.883128291s)
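
TestRunningBinaryUpgrade exercises an in-place upgrade: a released v1.26.0 binary boots the cluster first, then the binary under test re-runs start against the same profile (the second Run, immediately below). The old profile's config, loaded further down, has empty Network and SocketVMnetPath fields, which is consistent with this older start succeeding while the fresh socket_vmnet-backed creations above fail. In outline, with the exact commands from the two Run lines:

    # step 1: the old release creates and starts the cluster (succeeded, ~51s)
    /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3067320143 \
      start -p running-upgrade-376000 --memory=2200 --vm-driver=qemu2
    # step 2: the binary under test upgrades the same profile in place
    # (fails here with exit status 80 after 8m24s)
    out/minikube-darwin-arm64 \
      start -p running-upgrade-376000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
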
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-376000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-376000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.671884708s)

-- stdout --
	* [running-upgrade-376000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-376000" primary control-plane node in "running-upgrade-376000" cluster
	* Updating the running qemu2 "running-upgrade-376000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 03:33:02.155787    8811 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:33:02.155930    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:33:02.155934    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:33:02.155936    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:33:02.156067    8811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:33:02.157277    8811 out.go:298] Setting JSON to false
	I0729 03:33:02.174821    8811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5551,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:33:02.174895    8811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:33:02.180701    8811 out.go:177] * [running-upgrade-376000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:33:02.188707    8811 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:33:02.188760    8811 notify.go:220] Checking for updates...
	I0729 03:33:02.196656    8811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:33:02.200857    8811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:33:02.204622    8811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:33:02.207642    8811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:33:02.210675    8811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:33:02.213967    8811 config.go:182] Loaded profile config "running-upgrade-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:33:02.217608    8811 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 03:33:02.220760    8811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:33:02.224693    8811 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:33:02.231650    8811 start.go:297] selected driver: qemu2
	I0729 03:33:02.231658    8811 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:33:02.231707    8811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:33:02.233967    8811 cni.go:84] Creating CNI manager for ""
	I0729 03:33:02.233988    8811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:33:02.234014    8811 start.go:340] cluster config:
	{Name:running-upgrade-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:33:02.234068    8811 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:33:02.242647    8811 out.go:177] * Starting "running-upgrade-376000" primary control-plane node in "running-upgrade-376000" cluster
	I0729 03:33:02.246496    8811 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 03:33:02.246513    8811 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 03:33:02.246528    8811 cache.go:56] Caching tarball of preloaded images
	I0729 03:33:02.246586    8811 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:33:02.246592    8811 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 03:33:02.246652    8811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/config.json ...
	I0729 03:33:02.247126    8811 start.go:360] acquireMachinesLock for running-upgrade-376000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:33:02.247158    8811 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "running-upgrade-376000"
	I0729 03:33:02.247168    8811 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:33:02.247173    8811 fix.go:54] fixHost starting: 
	I0729 03:33:02.247853    8811 fix.go:112] recreateIfNeeded on running-upgrade-376000: state=Running err=<nil>
	W0729 03:33:02.247864    8811 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:33:02.251693    8811 out.go:177] * Updating the running qemu2 "running-upgrade-376000" VM ...
	I0729 03:33:02.261617    8811 machine.go:94] provisionDockerMachine start ...
	I0729 03:33:02.261669    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.261792    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.261796    8811 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 03:33:02.325116    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-376000
	
	I0729 03:33:02.325131    8811 buildroot.go:166] provisioning hostname "running-upgrade-376000"
	I0729 03:33:02.325184    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.325287    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.325292    8811 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-376000 && echo "running-upgrade-376000" | sudo tee /etc/hostname
	I0729 03:33:02.386154    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-376000
	
	I0729 03:33:02.386202    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.386308    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.386316    8811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-376000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-376000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-376000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 03:33:02.445688    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 03:33:02.445707    8811 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19337-6349/.minikube CaCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19337-6349/.minikube}
	I0729 03:33:02.445718    8811 buildroot.go:174] setting up certificates
	I0729 03:33:02.445723    8811 provision.go:84] configureAuth start
	I0729 03:33:02.445729    8811 provision.go:143] copyHostCerts
	I0729 03:33:02.445824    8811 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem, removing ...
	I0729 03:33:02.445829    8811 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem
	I0729 03:33:02.445964    8811 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem (1082 bytes)
	I0729 03:33:02.446141    8811 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem, removing ...
	I0729 03:33:02.446144    8811 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem
	I0729 03:33:02.446187    8811 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem (1123 bytes)
	I0729 03:33:02.446298    8811 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem, removing ...
	I0729 03:33:02.446302    8811 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem
	I0729 03:33:02.446341    8811 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem (1679 bytes)
	I0729 03:33:02.446425    8811 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-376000 san=[127.0.0.1 localhost minikube running-upgrade-376000]
	I0729 03:33:02.517961    8811 provision.go:177] copyRemoteCerts
	I0729 03:33:02.518005    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 03:33:02.518014    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:33:02.551855    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 03:33:02.559124    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 03:33:02.565916    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 03:33:02.572953    8811 provision.go:87] duration metric: took 127.228292ms to configureAuth
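configureAuth above generates a server certificate signed by the minikube CA, carrying the SANs the log reports (san=[127.0.0.1 localhost minikube running-upgrade-376000]). A hedged, self-contained sketch of that step using the Go standard library; the in-memory throwaway CA, key sizes, and lifetimes are demo assumptions (minikube loads its CA from ca.pem/ca-key.pem instead):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA so the demo is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.running-upgrade-376000"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert with the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "running-upgrade-376000"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "running-upgrade-376000"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```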
	I0729 03:33:02.572963    8811 buildroot.go:189] setting minikube options for container-runtime
	I0729 03:33:02.573096    8811 config.go:182] Loaded profile config "running-upgrade-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:33:02.573132    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.573243    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.573248    8811 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 03:33:02.634728    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 03:33:02.634736    8811 buildroot.go:70] root file system type: tmpfs
	I0729 03:33:02.634788    8811 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 03:33:02.634836    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.634942    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.634974    8811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 03:33:02.698256    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 03:33:02.698307    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.698417    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.698425    8811 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 03:33:02.758180    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
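The diff/mv command above implements an update-if-changed pattern: the new unit is written to docker.service.new, promoted over the live file only when the contents differ, and only then is the daemon reloaded and docker restarted. A minimal sketch of the same idea; the /tmp path is illustrative, and the systemctl step is printed rather than executed so the demo runs anywhere:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnit writes newBody to path only if it differs from the current
// contents, and reports whether anything actually changed.
func updateUnit(path string, newBody []byte) (bool, error) {
	old, _ := os.ReadFile(path) // a missing file simply reads as empty
	if bytes.Equal(old, newBody) {
		return false, nil // unchanged: skip the reload/restart entirely
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newBody, 0644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := updateUnit("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	if changed {
		// minikube runs these over SSH; printed here for safety.
		fmt.Println("would run: systemctl daemon-reload && systemctl restart docker")
	}
}
```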
	I0729 03:33:02.758193    8811 machine.go:97] duration metric: took 496.579417ms to provisionDockerMachine
	I0729 03:33:02.758198    8811 start.go:293] postStartSetup for "running-upgrade-376000" (driver="qemu2")
	I0729 03:33:02.758204    8811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 03:33:02.758252    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 03:33:02.758262    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:33:02.790904    8811 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 03:33:02.792257    8811 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 03:33:02.792262    8811 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19337-6349/.minikube/addons for local assets ...
	I0729 03:33:02.792338    8811 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19337-6349/.minikube/files for local assets ...
	I0729 03:33:02.792429    8811 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem -> 68432.pem in /etc/ssl/certs
	I0729 03:33:02.792535    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 03:33:02.795337    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem --> /etc/ssl/certs/68432.pem (1708 bytes)
	I0729 03:33:02.803010    8811 start.go:296] duration metric: took 44.806708ms for postStartSetup
	I0729 03:33:02.803024    8811 fix.go:56] duration metric: took 555.862208ms for fixHost
	I0729 03:33:02.803060    8811 main.go:141] libmachine: Using SSH client type: native
	I0729 03:33:02.803163    8811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dc6a10] 0x104dc9270 <nil>  [] 0s} localhost 51231 <nil> <nil>}
	I0729 03:33:02.803167    8811 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 03:33:02.864277    8811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249182.696785972
	
	I0729 03:33:02.864287    8811 fix.go:216] guest clock: 1722249182.696785972
	I0729 03:33:02.864292    8811 fix.go:229] Guest: 2024-07-29 03:33:02.696785972 -0700 PDT Remote: 2024-07-29 03:33:02.803025 -0700 PDT m=+0.666736335 (delta=-106.239028ms)
	I0729 03:33:02.864304    8811 fix.go:200] guest clock delta is within tolerance: -106.239028ms
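The clock check above asks the guest for `date +%s.%N` and compares it to the host clock, accepting the skew when the delta is small (here -106ms). A sketch of that comparison; the 1s tolerance is an assumption for the demo, and the command runs locally rather than over SSH (on macOS, %N needs GNU date):

```go
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for the remote `date +%s.%N` in the log.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	host := float64(time.Now().UnixNano()) / 1e9
	delta := time.Duration((guest - host) * float64(time.Second))
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) < 1.0 { // assumed tolerance
		fmt.Println("delta is within tolerance")
	}
}
```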
	I0729 03:33:02.864313    8811 start.go:83] releasing machines lock for "running-upgrade-376000", held for 617.156875ms
	I0729 03:33:02.864374    8811 ssh_runner.go:195] Run: cat /version.json
	I0729 03:33:02.864383    8811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 03:33:02.864384    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:33:02.864407    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	W0729 03:33:02.864981    8811 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51340->127.0.0.1:51231: read: connection reset by peer
	I0729 03:33:02.865001    8811 retry.go:31] will retry after 152.298185ms: ssh: handshake failed: read tcp 127.0.0.1:51340->127.0.0.1:51231: read: connection reset by peer
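The handshake failure above is treated as transient and retried after a short randomized delay ("will retry after 152.298185ms"). An illustrative retry helper in that spirit; the address, attempt count, and delay range are demo assumptions:

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry retries a TCP dial a few times with a jittered backoff,
// the way transient SSH dial failures are retried in the log above.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		c, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return c, nil
		}
		d := time.Duration(100+rand.Intn(100)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return nil, err
}

func main() {
	if c, err := dialWithRetry("127.0.0.1:51231", 3); err == nil {
		c.Close()
	}
}
```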
	W0729 03:33:02.895680    8811 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 03:33:02.895731    8811 ssh_runner.go:195] Run: systemctl --version
	I0729 03:33:02.897609    8811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 03:33:02.899318    8811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 03:33:02.899345    8811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 03:33:02.902720    8811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 03:33:02.907026    8811 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 03:33:02.907039    8811 start.go:495] detecting cgroup driver to use...
	I0729 03:33:02.907162    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:33:02.912097    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 03:33:02.915344    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 03:33:02.918076    8811 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 03:33:02.918101    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 03:33:02.921196    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:33:02.924890    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 03:33:02.928221    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:33:02.931459    8811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 03:33:02.934539    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 03:33:02.937539    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 03:33:02.940801    8811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 03:33:02.944048    8811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 03:33:02.946660    8811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 03:33:02.949504    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:33:03.034642    8811 ssh_runner.go:195] Run: sudo systemctl restart containerd
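The run of sed commands above rewrites individual keys in /etc/containerd/config.toml with anchored, whitespace-preserving substitutions (e.g., forcing SystemdCgroup = false for the cgroupfs driver). A sketch of one such rewrite in Go, applied to an inline sample instead of the real file so it runs anywhere:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample fragment of a containerd config; contents are illustrative.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// (?m) anchors ^/$ per line; ${1} keeps the original indentation,
	// just like the sed capture group in the log.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```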
	I0729 03:33:03.041276    8811 start.go:495] detecting cgroup driver to use...
	I0729 03:33:03.041353    8811 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 03:33:03.049895    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:33:03.101866    8811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 03:33:03.130251    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:33:03.135178    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 03:33:03.139771    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:33:03.145318    8811 ssh_runner.go:195] Run: which cri-dockerd
	I0729 03:33:03.146594    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 03:33:03.149347    8811 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 03:33:03.155152    8811 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 03:33:03.228828    8811 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 03:33:03.323759    8811 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 03:33:03.323833    8811 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 03:33:03.329124    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:33:03.425248    8811 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:33:04.951154    8811 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.52592125s)
	I0729 03:33:04.951222    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 03:33:04.956084    8811 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 03:33:04.962024    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:33:04.966744    8811 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 03:33:05.041659    8811 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 03:33:05.122240    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:33:05.202183    8811 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 03:33:05.208626    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:33:05.214098    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:33:05.291111    8811 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 03:33:05.330153    8811 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 03:33:05.330218    8811 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 03:33:05.332316    8811 start.go:563] Will wait 60s for crictl version
	I0729 03:33:05.332349    8811 ssh_runner.go:195] Run: which crictl
	I0729 03:33:05.333826    8811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 03:33:05.345572    8811 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 03:33:05.345636    8811 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:33:05.358231    8811 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:33:05.378851    8811 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 03:33:05.378920    8811 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 03:33:05.380304    8811 kubeadm.go:883] updating cluster {Name:running-upgrade-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 03:33:05.380345    8811 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 03:33:05.380384    8811 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:33:05.390681    8811 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 03:33:05.390693    8811 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 03:33:05.390738    8811 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 03:33:05.393762    8811 ssh_runner.go:195] Run: which lz4
	I0729 03:33:05.395098    8811 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 03:33:05.396217    8811 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 03:33:05.396227    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 03:33:06.306862    8811 docker.go:649] duration metric: took 911.803ms to copy over tarball
	I0729 03:33:06.306917    8811 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 03:33:07.493670    8811 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.18676225s)
	I0729 03:33:07.493686    8811 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 03:33:07.509620    8811 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 03:33:07.513088    8811 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 03:33:07.518411    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:33:07.598281    8811 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:33:07.844003    8811 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:33:07.856404    8811 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 03:33:07.856414    8811 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
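The check above lists images via `docker images --format {{.Repository}}:{{.Tag}}` and flags any expected image that is missing. Notably, the preload tarball ships images under the old k8s.gcr.io names while the check expects registry.k8s.io names, which appears to be why it reports "wasn't preloaded" and falls back to loading cached images one by one. A sketch of that verification (assumes a local docker CLI; the expected list is trimmed for brevity):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	// Index what the runtime actually has.
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Subset of the expected list from the log above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.24.1",
		"registry.k8s.io/pause:3.7",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Printf("%s wasn't preloaded\n", img)
		}
	}
}
```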
	I0729 03:33:07.856425    8811 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 03:33:07.861089    8811 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:33:07.863747    8811 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:33:07.865728    8811 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:33:07.865801    8811 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:33:07.867782    8811 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 03:33:07.867922    8811 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:33:07.868988    8811 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:33:07.869104    8811 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:33:07.870205    8811 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 03:33:07.870241    8811 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:33:07.871652    8811 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:33:07.871685    8811 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:33:07.872786    8811 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:33:07.872822    8811 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:33:07.873681    8811 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:33:07.874204    8811 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:33:08.247575    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:33:08.250447    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 03:33:08.261275    8811 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 03:33:08.261309    8811 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:33:08.261370    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:33:08.270435    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:33:08.274005    8811 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 03:33:08.274024    8811 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 03:33:08.274073    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 03:33:08.282870    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 03:33:08.283250    8811 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 03:33:08.283266    8811 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:33:08.283323    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0729 03:33:08.289579    8811 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 03:33:08.289706    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:33:08.290099    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 03:33:08.290196    8811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 03:33:08.295888    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 03:33:08.300146    8811 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 03:33:08.300156    8811 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 03:33:08.300163    8811 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:33:08.300175    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 03:33:08.300203    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:33:08.313233    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 03:33:08.313346    8811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 03:33:08.313454    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:33:08.317673    8811 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 03:33:08.317682    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 03:33:08.321291    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 03:33:08.327480    8811 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 03:33:08.327497    8811 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 03:33:08.327511    8811 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:33:08.327521    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 03:33:08.327550    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:33:08.380887    8811 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 03:33:08.380962    8811 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 03:33:08.380979    8811 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:33:08.380965    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 03:33:08.381033    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 03:33:08.387151    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:33:08.402032    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 03:33:08.402138    8811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 03:33:08.414498    8811 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 03:33:08.414511    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0729 03:33:08.415792    8811 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 03:33:08.415809    8811 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:33:08.415796    8811 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 03:33:08.415859    8811 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:33:08.415874    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0729 03:33:08.536652    8811 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 03:33:08.536760    8811 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:33:08.564832    8811 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 03:33:08.564882    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 03:33:08.587288    8811 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 03:33:08.587309    8811 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:33:08.587361    8811 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:33:08.671931    8811 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 03:33:08.671946    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 03:33:09.483446    8811 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 03:33:09.483528    8811 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 03:33:09.483965    8811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 03:33:09.489370    8811 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 03:33:09.489432    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 03:33:09.548732    8811 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 03:33:09.548746    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 03:33:09.786257    8811 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 03:33:09.786291    8811 cache_images.go:92] duration metric: took 1.929897208s to LoadCachedImages
	W0729 03:33:09.786335    8811 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0729 03:33:09.786347    8811 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 03:33:09.786411    8811 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-376000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 03:33:09.786481    8811 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 03:33:09.804408    8811 cni.go:84] Creating CNI manager for ""
	I0729 03:33:09.804419    8811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:33:09.804424    8811 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 03:33:09.804433    8811 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-376000 NodeName:running-upgrade-376000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 03:33:09.804500    8811 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-376000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
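The kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A quick hedged sketch that splits such a stream and prints each document's kind, a cheap sanity check before handing the file to kubeadm; it assumes gopkg.in/yaml.v3 is available in the module:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Only pull the two identifying fields out of each document.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```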
	I0729 03:33:09.804550    8811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 03:33:09.808107    8811 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 03:33:09.808135    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 03:33:09.810775    8811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 03:33:09.815515    8811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 03:33:09.820749    8811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 03:33:09.826549    8811 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 03:33:09.827926    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:33:09.901011    8811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:33:09.906505    8811 certs.go:68] Setting up /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000 for IP: 10.0.2.15
	I0729 03:33:09.906514    8811 certs.go:194] generating shared ca certs ...
	I0729 03:33:09.906522    8811 certs.go:226] acquiring lock for ca certs: {Name:mk5485201dd0b8c49ea299ac713a7956ec13f382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:33:09.906753    8811 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.key
	I0729 03:33:09.906787    8811 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.key
	I0729 03:33:09.906792    8811 certs.go:256] generating profile certs ...
	I0729 03:33:09.906865    8811 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.key
	I0729 03:33:09.906877    8811 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.key.a992f7be
	I0729 03:33:09.906886    8811 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.crt.a992f7be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 03:33:10.079068    8811 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.crt.a992f7be ...
	I0729 03:33:10.079079    8811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.crt.a992f7be: {Name:mk1552d2cbbc2358078b6060699cc6c27013dd7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:33:10.079367    8811 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.key.a992f7be ...
	I0729 03:33:10.079371    8811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.key.a992f7be: {Name:mk440e609730263069f0b0554bb734d4aca88f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:33:10.079515    8811 certs.go:381] copying /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.crt.a992f7be -> /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.crt
	I0729 03:33:10.080098    8811 certs.go:385] copying /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.key.a992f7be -> /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.key
	I0729 03:33:10.080237    8811 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/proxy-client.key
	I0729 03:33:10.080362    8811 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843.pem (1338 bytes)
	W0729 03:33:10.080388    8811 certs.go:480] ignoring /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843_empty.pem, impossibly tiny 0 bytes
	I0729 03:33:10.080394    8811 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 03:33:10.080415    8811 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem (1082 bytes)
	I0729 03:33:10.080436    8811 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem (1123 bytes)
	I0729 03:33:10.080454    8811 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem (1679 bytes)
	I0729 03:33:10.080492    8811 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem (1708 bytes)
	I0729 03:33:10.080889    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 03:33:10.088220    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 03:33:10.094985    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 03:33:10.101891    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 03:33:10.109307    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 03:33:10.116627    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 03:33:10.123585    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 03:33:10.131068    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 03:33:10.138138    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem --> /usr/share/ca-certificates/68432.pem (1708 bytes)
	I0729 03:33:10.145075    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 03:33:10.152120    8811 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843.pem --> /usr/share/ca-certificates/6843.pem (1338 bytes)
	I0729 03:33:10.159294    8811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 03:33:10.164212    8811 ssh_runner.go:195] Run: openssl version
	I0729 03:33:10.166102    8811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68432.pem && ln -fs /usr/share/ca-certificates/68432.pem /etc/ssl/certs/68432.pem"
	I0729 03:33:10.169507    8811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68432.pem
	I0729 03:33:10.170934    8811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:20 /usr/share/ca-certificates/68432.pem
	I0729 03:33:10.170953    8811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68432.pem
	I0729 03:33:10.172546    8811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68432.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 03:33:10.175293    8811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 03:33:10.178043    8811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:33:10.179572    8811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:33:10.179594    8811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:33:10.181251    8811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 03:33:10.184228    8811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6843.pem && ln -fs /usr/share/ca-certificates/6843.pem /etc/ssl/certs/6843.pem"
	I0729 03:33:10.187263    8811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6843.pem
	I0729 03:33:10.188604    8811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:20 /usr/share/ca-certificates/6843.pem
	I0729 03:33:10.188624    8811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6843.pem
	I0729 03:33:10.190376    8811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6843.pem /etc/ssl/certs/51391683.0"
	I0729 03:33:10.193230    8811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 03:33:10.194768    8811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 03:33:10.196463    8811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 03:33:10.198170    8811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 03:33:10.199890    8811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 03:33:10.201804    8811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 03:33:10.203787    8811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
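Each `openssl x509 -checkend 86400` call above verifies that a certificate remains valid for at least another 24 hours before reusing it. An equivalent check in Go; the path is one of those from the log and must exist where the sketch runs:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors -checkend 86400: fail if expiry falls inside the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}
```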
	I0729 03:33:10.205560    8811 kubeadm.go:392] StartCluster: {Name:running-upgrade-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:33:10.205628    8811 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:33:10.216182    8811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 03:33:10.219464    8811 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 03:33:10.219469    8811 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 03:33:10.219491    8811 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 03:33:10.222232    8811 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:10.222268    8811 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-376000" does not appear in /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:33:10.222286    8811 kubeconfig.go:62] /Users/jenkins/minikube-integration/19337-6349/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-376000" cluster setting kubeconfig missing "running-upgrade-376000" context setting]
	I0729 03:33:10.222451    8811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:33:10.223347    8811 kapi.go:59] client config for running-upgrade-376000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10615c080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
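Note: the kapi client config above amounts to building a client-go rest.Config directly from the profile's client certificate, key, and CA, with no kubeconfig file in the loop. A minimal sketch of that construction, using placeholder paths rather than the report's real ones:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host plus mutual-TLS cert material, as in the dumped rest.Config.
	// Paths here are hypothetical stand-ins for the profile directory.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profile/client.crt", // hypothetical
			KeyFile:  "/path/to/profile/client.key", // hypothetical
			CAFile:   "/path/to/ca.crt",             // hypothetical
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(clientset != nil)
}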
	I0729 03:33:10.224197    8811 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 03:33:10.226896    8811 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-376000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
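Note: the drift check above is a plain `diff -u` over the rendered kubeadm configs, where exit status 1 (files differ) is what triggers the reconfigure path. A sketch of that decision under diff's standard 0/1/2 exit convention, with the same file paths as the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted reports whether the two rendered configs differ.
// diff exits 0 when identical, 1 when different, and >=2 on errors.
func configDrifted(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // files differ: reconfigure the cluster
	}
	return false, err // a real diff failure, not drift
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}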
	I0729 03:33:10.226902    8811 kubeadm.go:1160] stopping kube-system containers ...
	I0729 03:33:10.226938    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:33:10.237909    8811 docker.go:483] Stopping containers: [2cf4d3927589 b2145b64fb44 1645b0733629 99d836757714 2ea8d8b5030a 228f0e7d954c c706c2efe503 86242cc8dea1 1b9c2370c374 d82339fc2408 a0403ae3a425 d283ccfbc778]
	I0729 03:33:10.237985    8811 ssh_runner.go:195] Run: docker stop 2cf4d3927589 b2145b64fb44 1645b0733629 99d836757714 2ea8d8b5030a 228f0e7d954c c706c2efe503 86242cc8dea1 1b9c2370c374 d82339fc2408 a0403ae3a425 d283ccfbc778
	I0729 03:33:10.248480    8811 ssh_runner.go:195] Run: sudo systemctl stop kubelet
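Note: the teardown above is one `docker stop` over all the collected kube-system container IDs, followed by `systemctl stop kubelet` so nothing restarts the pods mid-reconfigure. A hedged sketch of the same two steps:

package main

import (
	"fmt"
	"os/exec"
)

func stopKubeSystem(ids []string) error {
	// docker stop accepts many container IDs in a single invocation,
	// matching the single Run line in the log above.
	args := append([]string{"stop"}, ids...)
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker stop: %v: %s", err, out)
	}
	// Stop the kubelet afterwards so it cannot recreate the pods.
	if out, err := exec.Command("sudo", "systemctl", "stop", "kubelet").CombinedOutput(); err != nil {
		return fmt.Errorf("stop kubelet: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(stopKubeSystem([]string{"2cf4d3927589", "b2145b64fb44"}))
}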
	I0729 03:33:10.353793    8811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:33:10.357790    8811 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 29 10:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 10:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 10:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 10:32 /etc/kubernetes/scheduler.conf
	
	I0729 03:33:10.357817    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf
	I0729 03:33:10.361284    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:10.361312    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:33:10.364689    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf
	I0729 03:33:10.367727    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:10.367747    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:33:10.370695    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf
	I0729 03:33:10.373400    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:10.373421    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:33:10.376264    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf
	I0729 03:33:10.378777    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:10.378803    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
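Note: each of the four checks above greps one kubeconfig-style file for the expected control-plane endpoint and removes the file when grep exits 1 (no match), so the kubeadm phases below regenerate it with the right endpoint. A sketch of one iteration of that pattern:

package main

import (
	"fmt"
	"os/exec"
)

// removeIfStale deletes path unless it already references endpoint.
// grep exits 0 on a match and non-zero when nothing matched.
func removeIfStale(endpoint, path string) error {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err == nil {
		return nil // endpoint present, keep the file
	}
	return exec.Command("sudo", "rm", "-f", path).Run()
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		fmt.Println(removeIfStale("https://control-plane.minikube.internal:51263", "/etc/kubernetes/"+f))
	}
}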
	I0729 03:33:10.381364    8811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:33:10.384555    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:33:10.405184    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:33:11.016483    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:33:11.219552    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:33:11.240274    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
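Note: the restart then replays the individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied config, with the pinned v1.24.1 binaries first on PATH. A sketch of that loop, mirroring the bash -c invocations above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// bash -c so the $PATH expansion happens inside the guest shell.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}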
	I0729 03:33:11.260610    8811 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:33:11.260685    8811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:33:11.763033    8811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:33:12.262709    8811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:33:12.266870    8811 api_server.go:72] duration metric: took 1.006281417s to wait for apiserver process to appear ...
	I0729 03:33:12.266878    8811 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:33:12.266887    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:17.268928    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:17.268974    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:22.269509    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:22.269564    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:27.270309    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:27.270332    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:32.270897    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:32.270967    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:37.272211    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:37.272291    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:42.273953    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:42.274027    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:47.276262    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:47.276353    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:52.278941    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:52.279019    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:33:57.281576    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:33:57.281688    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:02.284315    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:02.284394    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:07.286943    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:07.287041    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:12.289524    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
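Note: every healthz probe above dies with a five-second client timeout, and exhausting the probes is what drops the run into the log-collection pass below. A minimal sketch of such a probe loop, assuming a 5s per-request timeout and skipping cert verification only because this is a sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gaps between probes in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; real code pins the CA
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the pgrep wait above
	}
	fmt.Println("apiserver never became healthy")
}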
	I0729 03:34:12.290017    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:12.329948    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:12.330081    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:34:12.351172    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:34:12.351267    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:34:12.366354    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:34:12.366432    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:34:12.379160    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:34:12.379235    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:34:12.390843    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:34:12.390914    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:34:12.401474    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:34:12.401543    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:34:12.411573    8811 logs.go:276] 0 containers: []
	W0729 03:34:12.411590    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:34:12.411645    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:34:12.422277    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
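Note: the per-component discovery above is one `docker ps -a` per name filter, keeping only the container IDs (two entries per control-plane component because both the pre- and post-restart containers still exist). A sketch of a single lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, running or exited, whose name
// carries the k8s_<component> prefix the kubelet assigns.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	fmt.Println(ids, err) // e.g. [bf07931eab79 86242cc8dea1]
}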
	I0729 03:34:12.422294    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:34:12.422300    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:34:12.494643    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:34:12.494656    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:34:12.509854    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:34:12.509866    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:34:12.521746    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:34:12.521756    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:34:12.537418    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:34:12.537428    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:34:12.576519    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:34:12.576525    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:34:12.580774    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:34:12.580783    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:34:12.596373    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:34:12.596386    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:34:12.608319    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:34:12.608331    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:34:12.620217    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:34:12.620228    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:34:12.634398    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:34:12.634410    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:34:12.646013    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:34:12.646025    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:34:12.657399    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:34:12.657415    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:34:12.676743    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:34:12.676753    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:34:12.702958    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:34:12.702968    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:34:12.715352    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:34:12.715364    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:34:12.736951    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:34:12.736961    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
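Note: each gather step above is a bounded tail, `docker logs --tail 400` per container plus `journalctl -n 400` per unit and one `kubectl describe nodes` against the in-VM kubeconfig; the whole gather-then-probe cycle then repeats below until the overall deadline. A sketch of the per-container piece:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines a container wrote,
// mirroring the `docker logs --tail 400 <id>` runs in the log.
// docker logs writes to both stdout and stderr, hence CombinedOutput.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("228f0e7d954c", 400)
	fmt.Println(err)
	fmt.Println(logs)
}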
	I0729 03:34:15.253547    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:20.256311    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:20.256804    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:20.294550    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:20.294716    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:34:20.317123    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:34:20.317236    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:34:20.331806    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:34:20.331893    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:34:20.344501    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:34:20.344577    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:34:20.355866    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:34:20.355932    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:34:20.366432    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:34:20.366494    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:34:20.376954    8811 logs.go:276] 0 containers: []
	W0729 03:34:20.376967    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:34:20.377027    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:34:20.392058    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:34:20.392084    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:34:20.392088    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:34:20.406411    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:34:20.406420    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:34:20.426320    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:34:20.426331    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:34:20.440677    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:34:20.440687    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:34:20.452011    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:34:20.452020    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:34:20.476611    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:34:20.476619    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:34:20.492272    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:34:20.492284    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:34:20.532067    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:34:20.532076    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:34:20.570740    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:34:20.570750    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:34:20.582621    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:34:20.582634    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:34:20.598406    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:34:20.598416    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:34:20.609696    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:34:20.609709    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:34:20.614587    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:34:20.614597    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:34:20.628345    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:34:20.628356    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:34:20.643722    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:34:20.643733    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:34:20.661217    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:34:20.661228    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:34:20.672188    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:34:20.672198    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:34:23.185091    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:28.187893    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:28.188308    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:28.223736    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:28.223875    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:34:28.244158    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:34:28.244260    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:34:28.260129    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:34:28.260212    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:34:28.272574    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:34:28.272646    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:34:28.283559    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:34:28.283623    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:34:28.294976    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:34:28.295045    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:34:28.308086    8811 logs.go:276] 0 containers: []
	W0729 03:34:28.308097    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:34:28.308152    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:34:28.318450    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:34:28.318466    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:34:28.318471    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:34:28.335746    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:34:28.335756    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:34:28.375614    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:34:28.375620    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:34:28.435583    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:34:28.435594    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:34:28.455884    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:34:28.455893    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:34:28.467902    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:34:28.467914    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:34:28.479256    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:34:28.479268    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:34:28.493111    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:34:28.493123    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:34:28.509164    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:34:28.509177    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:34:28.526584    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:34:28.526593    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:34:28.538201    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:34:28.538215    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:34:28.564387    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:34:28.564397    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:34:28.576511    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:34:28.576524    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:34:28.581302    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:34:28.581310    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:34:28.595959    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:34:28.595969    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:34:28.617363    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:34:28.617375    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:34:28.629387    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:34:28.629398    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:34:31.143229    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:36.145987    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:36.146303    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:36.177038    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:36.177163    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:34:36.194725    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:34:36.194816    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:34:36.208402    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:34:36.208471    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:34:36.222558    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:34:36.222626    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:34:36.232865    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:34:36.232938    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:34:36.243058    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:34:36.243115    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:34:36.252992    8811 logs.go:276] 0 containers: []
	W0729 03:34:36.253002    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:34:36.253056    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:34:36.263500    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:34:36.263515    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:34:36.263521    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:34:36.274841    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:34:36.274851    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:34:36.298671    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:34:36.298681    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:34:36.322653    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:34:36.322664    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:34:36.348863    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:34:36.348873    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:34:36.360506    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:34:36.360517    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:34:36.399739    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:34:36.399745    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:34:36.404343    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:34:36.404349    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:34:36.438016    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:34:36.438028    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:34:36.451724    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:34:36.451736    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:34:36.466040    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:34:36.466050    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:34:36.476890    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:34:36.476904    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:34:36.496263    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:34:36.496273    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:34:36.510215    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:34:36.510228    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:34:36.527327    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:34:36.527338    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:34:36.542130    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:34:36.542143    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:34:36.554872    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:34:36.554882    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:34:39.073470    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:44.076177    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:44.076545    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:44.117622    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:44.117750    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:34:44.134775    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:34:44.134864    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:34:44.148058    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:34:44.148124    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:34:44.164803    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:34:44.164874    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:34:44.175463    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:34:44.175521    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:34:44.186954    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:34:44.187029    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:34:44.202333    8811 logs.go:276] 0 containers: []
	W0729 03:34:44.202346    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:34:44.202398    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:34:44.213155    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:34:44.213173    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:34:44.213178    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:34:44.232701    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:34:44.232713    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:34:44.243683    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:34:44.243694    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:34:44.255422    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:34:44.255434    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:34:44.260048    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:34:44.260057    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:34:44.274081    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:34:44.274089    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:34:44.285383    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:34:44.285392    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:34:44.300482    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:34:44.300493    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:34:44.318043    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:34:44.318053    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:34:44.355344    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:34:44.355351    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:34:44.369918    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:34:44.369929    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:34:44.382618    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:34:44.382631    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:34:44.396682    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:34:44.396694    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:34:44.412946    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:34:44.412956    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:34:44.424245    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:34:44.424255    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:34:44.448334    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:34:44.448346    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:34:44.482997    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:34:44.483010    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:34:46.996792    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:51.999566    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:52.000020    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:52.040907    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:52.041044    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:34:52.062471    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:34:52.062580    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:34:52.078677    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:34:52.078744    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:34:52.091221    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:34:52.091290    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:34:52.102127    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:34:52.102196    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:34:52.112857    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:34:52.112915    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:34:52.123151    8811 logs.go:276] 0 containers: []
	W0729 03:34:52.123164    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:34:52.123213    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:34:52.133906    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:34:52.133923    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:34:52.133929    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:34:52.156722    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:34:52.156735    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:34:52.172337    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:34:52.172350    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:34:52.203129    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:34:52.203142    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:34:52.242587    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:34:52.242596    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:34:52.280648    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:34:52.280660    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:34:52.294630    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:34:52.294642    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:34:52.320651    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:34:52.320660    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:34:52.324766    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:34:52.324774    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:34:52.338804    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:34:52.338814    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:34:52.353871    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:34:52.353880    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:34:52.365664    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:34:52.365678    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:34:52.379914    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:34:52.379926    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:34:52.391676    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:34:52.391689    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:34:52.407983    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:34:52.407999    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:34:52.419794    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:34:52.419809    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:34:52.431487    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:34:52.431497    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:34:54.944918    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:34:59.947188    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:34:59.947603    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:34:59.988419    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:34:59.988561    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:00.009278    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:00.009391    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:00.026746    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:00.026821    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:00.039644    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:00.039716    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:00.050593    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:00.050660    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:00.060978    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:00.061041    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:00.071177    8811 logs.go:276] 0 containers: []
	W0729 03:35:00.071190    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:00.071237    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:00.085209    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:00.085227    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:00.085232    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:00.096778    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:00.096789    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:00.108342    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:00.108351    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:00.123495    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:00.123512    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:00.134876    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:00.134887    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:00.174871    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:00.174878    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:00.188627    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:00.188635    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:00.208276    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:00.208286    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:00.223552    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:00.223560    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:00.239512    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:00.239524    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:00.263753    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:00.263765    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:00.286533    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:00.286543    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:00.300432    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:00.300444    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:00.304902    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:00.304909    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:00.330736    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:00.330745    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:00.341787    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:00.341797    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:00.356657    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:00.356667    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:02.894265    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:07.896816    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:07.897255    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:07.936901    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:07.937039    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:07.958929    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:07.959040    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:07.973545    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:07.973637    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:07.986091    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:07.986159    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:07.996655    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:07.996723    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:08.008515    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:08.008579    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:08.019064    8811 logs.go:276] 0 containers: []
	W0729 03:35:08.019073    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:08.019125    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:08.029655    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:08.029675    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:08.029680    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:08.044609    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:08.044621    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:08.059219    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:08.059230    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:08.071071    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:08.071081    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:08.106180    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:08.106190    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:08.129148    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:08.129160    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:08.153255    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:08.153264    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:08.165480    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:08.165492    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:08.176696    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:08.176708    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:08.191418    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:08.191431    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:08.209531    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:08.209541    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:08.222107    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:08.222122    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:08.226286    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:08.226294    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:08.239849    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:08.239860    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:08.260814    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:08.260826    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:08.276503    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:08.276514    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:08.288303    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:08.288316    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:10.829983    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:15.832292    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:15.832691    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:15.873285    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:15.873407    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:15.894995    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:15.895090    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:15.910650    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:15.910724    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:15.923027    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:15.923099    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:15.933807    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:15.933877    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:15.944352    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:15.944416    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:15.954746    8811 logs.go:276] 0 containers: []
	W0729 03:35:15.954757    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:15.954815    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:15.965551    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:15.965571    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:15.965576    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:16.006208    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:16.006216    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:16.010548    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:16.010555    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:16.024066    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:16.024077    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:16.041764    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:16.041775    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:16.061683    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:16.061696    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:16.081825    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:16.081838    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:16.093365    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:16.093375    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:16.109106    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:16.109119    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:16.129727    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:16.129741    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:16.147863    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:16.147875    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:16.184786    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:16.184801    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:16.198626    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:16.198636    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:16.209954    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:16.209966    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:16.220645    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:16.220659    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:16.245709    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:16.245720    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:16.259911    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:16.259924    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
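
The repeated "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded while awaiting headers" pairs above are minikube's apiserver health waiter: an HTTPS GET against https://10.0.2.15:8443/healthz that never receives response headers, so the Go HTTP client's deadline fires and produces exactly that error string. A minimal sketch of the pattern follows; the 5-second timeout and ~3-second pause are inferred from the log timestamps (probe at :10.82, failure at :15.83, next probe at :18.77), not taken from minikube's source.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz mimics the retry loop visible in the log: GET /healthz with a
// short client timeout, print the failure, pause, and try again.
func pollHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// the apiserver inside the VM presents a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// a fired Client.Timeout surfaces as:
			//   Get "...": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(3 * time.Second) // inferred pause between attempts
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	_ = pollHealthz("https://10.0.2.15:8443/healthz", 10)
}
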
	I0729 03:35:18.777387    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:23.780097    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:23.780500    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:23.821092    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:23.821236    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:23.842566    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:23.842674    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:23.857657    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:23.857740    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:23.870370    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:23.870438    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:23.881453    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:23.881524    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:23.892374    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:23.892447    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:23.902674    8811 logs.go:276] 0 containers: []
	W0729 03:35:23.902684    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:23.902736    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:23.913462    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:23.913483    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:23.913489    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:23.928746    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:23.928756    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:23.944276    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:23.944286    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:23.956575    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:23.956587    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:23.968326    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:23.968338    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:24.003651    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:24.003666    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:24.018384    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:24.018396    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:24.030227    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:24.030238    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:24.042115    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:24.042129    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:24.059265    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:24.059277    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:24.096759    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:24.096768    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:24.111676    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:24.111689    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:24.136197    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:24.136205    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:24.140500    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:24.140506    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:24.160605    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:24.160614    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:24.179219    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:24.179230    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:24.194750    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:24.194761    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:26.708670    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:31.711217    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:31.711456    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:31.732357    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:31.732465    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:31.746601    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:31.746672    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:31.759244    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:31.759305    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:31.769888    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:31.769950    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:31.780296    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:31.780364    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:31.791060    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:31.791123    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:31.800980    8811 logs.go:276] 0 containers: []
	W0729 03:35:31.800994    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:31.801055    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:31.811444    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:31.811465    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:31.811470    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:31.848805    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:31.848818    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:31.863089    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:31.863101    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:31.876914    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:31.876925    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:31.893460    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:31.893470    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:31.909153    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:31.909164    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:31.920733    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:31.920744    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:31.958892    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:31.958902    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:31.972424    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:31.972440    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:31.997734    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:31.997742    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:32.001698    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:32.001703    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:32.013190    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:32.013201    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:32.024683    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:32.024693    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:32.048328    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:32.048341    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:32.059788    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:32.059798    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:32.077807    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:32.077817    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:32.088843    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:32.088854    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:34.602452    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:39.604561    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:39.604758    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:39.616403    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:39.616485    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:39.627710    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:39.627787    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:39.638214    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:39.638280    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:39.649385    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:39.649453    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:39.662525    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:39.662594    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:39.677834    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:39.677897    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:39.688826    8811 logs.go:276] 0 containers: []
	W0729 03:35:39.688840    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:39.688898    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:39.699765    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:39.699781    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:39.699788    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:39.715142    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:39.715153    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:39.735068    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:39.735079    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:39.746890    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:39.746901    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:39.782737    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:39.782747    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:39.796959    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:39.796973    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:39.809161    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:39.809176    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:39.820656    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:39.820667    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:39.859315    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:39.859324    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:39.879301    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:39.879315    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:39.897391    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:39.897401    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:39.922504    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:39.922514    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:39.926792    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:39.926800    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:39.945253    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:39.945263    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:39.959577    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:39.959588    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:39.971156    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:39.971166    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:39.983184    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:39.983195    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
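
Each failed probe is followed by the same gathering pass: for every control-plane component, containers are enumerated with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, the last 400 lines of each match are tailed, and the kubelet/Docker journals plus a filtered dmesg are collected. The sketch below fans out over the same commands shown in the log, run through a local shell rather than minikube's ssh_runner (the wrapper and output formatting are illustrative).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the k8s_* name filters visible in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// run executes a command through bash, as ssh_runner does on the guest.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func gather() {
	for _, c := range components {
		// same enumeration command as in the log
		out, err := run(fmt.Sprintf("docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
		if err != nil {
			continue
		}
		ids := strings.Fields(out)
		if len(ids) == 0 {
			// matches the warning seen for "kindnet" above
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// tail the last 400 lines, as logs.go does
			logs, _ := run("docker logs --tail 400 " + id)
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
	// unit logs and kernel messages, same commands as in the log
	for _, cmd := range []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	} {
		out, _ := run(cmd)
		fmt.Println(out)
	}
}

func main() { gather() }
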
	I0729 03:35:42.497376    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:47.499575    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:47.499808    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:47.516173    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:47.516253    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:47.529312    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:47.529389    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:47.540030    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:47.540091    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:47.550651    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:47.550725    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:47.565152    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:47.565213    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:47.581715    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:47.581774    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:47.592098    8811 logs.go:276] 0 containers: []
	W0729 03:35:47.592115    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:47.592179    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:47.602933    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:47.602949    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:47.602954    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:47.622695    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:47.622707    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:47.636215    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:47.636228    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:47.659598    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:47.659608    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:47.675911    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:47.675924    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:47.687599    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:47.687611    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:47.698984    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:47.698995    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:47.737382    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:47.737393    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:47.751402    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:47.751414    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:47.765946    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:47.765958    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:47.776956    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:47.776968    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:47.788898    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:47.788913    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:47.793103    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:47.793110    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:47.828249    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:47.828262    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:47.843329    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:47.843341    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:47.854893    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:47.854902    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:47.866674    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:47.866688    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:50.393002    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:35:55.394525    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:35:55.394625    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:35:55.405754    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:35:55.405825    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:35:55.417153    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:35:55.417217    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:35:55.428083    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:35:55.428162    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:35:55.439372    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:35:55.439435    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:35:55.450521    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:35:55.450588    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:35:55.460583    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:35:55.460650    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:35:55.470788    8811 logs.go:276] 0 containers: []
	W0729 03:35:55.470799    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:35:55.470856    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:35:55.481059    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:35:55.481073    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:35:55.481079    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:35:55.493153    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:35:55.493166    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:35:55.505761    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:35:55.505776    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:35:55.522326    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:35:55.522338    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:35:55.538894    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:35:55.538905    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:35:55.551669    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:35:55.551680    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:35:55.591843    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:35:55.591855    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:35:55.609038    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:35:55.609049    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:35:55.633442    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:35:55.633459    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:35:55.638040    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:35:55.638049    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:35:55.653196    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:35:55.653211    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:35:55.674629    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:35:55.674641    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:35:55.694920    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:35:55.694935    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:35:55.707084    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:35:55.707094    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:35:55.746550    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:35:55.746565    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:35:55.772122    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:35:55.772146    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:35:55.783848    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:35:55.783861    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:35:58.298305    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:03.300487    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:03.300595    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:03.312189    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:03.312267    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:03.323886    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:03.323960    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:03.335347    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:03.335425    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:03.348008    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:03.348093    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:03.361592    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:03.361665    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:03.377172    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:03.377251    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:03.388928    8811 logs.go:276] 0 containers: []
	W0729 03:36:03.388940    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:03.388996    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:03.400890    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:03.400907    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:03.400913    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:03.422644    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:03.422670    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:03.448408    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:03.448423    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:03.490059    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:03.490079    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:03.526879    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:03.526890    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:03.538775    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:03.538789    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:03.551432    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:03.551444    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:03.563833    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:03.563845    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:03.580161    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:03.580172    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:03.584656    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:03.584664    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:03.600686    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:03.600699    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:03.616475    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:03.616487    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:03.635490    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:03.635505    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:03.648278    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:03.648290    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:03.663595    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:03.663610    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:03.681474    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:03.681489    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:03.696976    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:03.696990    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:06.214998    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:11.217199    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:11.217462    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:11.240657    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:11.240773    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:11.256398    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:11.256482    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:11.272342    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:11.272409    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:11.283123    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:11.283187    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:11.293655    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:11.293721    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:11.304118    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:11.304175    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:11.314111    8811 logs.go:276] 0 containers: []
	W0729 03:36:11.314123    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:11.314182    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:11.324376    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:11.324395    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:11.324401    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:11.335533    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:11.335544    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:11.346722    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:11.346731    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:11.361343    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:11.361356    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:11.377356    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:11.377367    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:11.392523    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:11.392533    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:11.412169    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:11.412178    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:11.431644    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:11.431654    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:11.445799    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:11.445813    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:11.457168    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:11.457181    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:11.480544    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:11.480552    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:11.484527    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:11.484534    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:11.520402    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:11.520415    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:11.539789    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:11.539801    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:11.551685    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:11.551694    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:11.565466    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:11.565480    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:11.605049    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:11.605057    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:14.124294    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:19.125975    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:19.126370    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:19.162137    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:19.162262    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:19.181768    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:19.181851    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:19.196754    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:19.196829    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:19.216737    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:19.216824    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:19.227655    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:19.227724    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:19.238492    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:19.238557    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:19.248817    8811 logs.go:276] 0 containers: []
	W0729 03:36:19.248832    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:19.248894    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:19.259120    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:19.259138    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:19.259144    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:19.274682    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:19.274692    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:19.286442    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:19.286452    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:19.298941    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:19.298951    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:19.333985    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:19.334000    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:19.346021    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:19.346034    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:19.364887    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:19.364898    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:19.384242    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:19.384259    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:19.395797    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:19.395811    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:19.410482    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:19.410496    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:19.428532    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:19.428546    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:19.444355    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:19.444369    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:19.481962    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:19.481970    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:19.502364    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:19.502378    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:19.520561    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:19.520574    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:19.543307    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:19.543313    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:19.547570    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:19.547576    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
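
The recurring "container status" command above relies on a two-level shell fallback: the backtick substitution `which crictl || echo crictl` resolves to the crictl binary when it is on PATH and to the bare word crictl otherwise; in the latter case sudo crictl ps -a fails and the trailing || sudo docker ps -a takes over. A small sketch of the same chain, kept in Go for consistency with the sketches above (the command string is copied verbatim from the log; the wrapper is illustrative).

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback chain from the log: prefer crictl
// when installed, otherwise fall back to plain docker ps.
func containerStatus() (string, error) {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
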
	I0729 03:36:22.063300    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:27.063523    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:27.063787    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:27.089765    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:27.089886    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:27.106922    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:27.107007    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:27.125012    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:27.125073    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:27.139711    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:27.139784    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:27.150022    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:27.150089    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:27.160603    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:27.160674    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:27.170724    8811 logs.go:276] 0 containers: []
	W0729 03:36:27.170736    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:27.170792    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:27.181183    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:27.181198    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:27.181203    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:27.220812    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:27.220821    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:27.234918    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:27.234931    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:27.246301    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:27.246314    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:27.263821    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:27.263833    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:27.275491    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:27.275503    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:27.279770    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:27.279779    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:27.313280    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:27.313291    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:27.327911    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:27.327923    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:27.339395    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:27.339407    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:27.356764    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:27.356776    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:27.379688    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:27.379695    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:27.393411    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:27.393421    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:27.413120    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:27.413133    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:27.428705    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:27.428715    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:27.440466    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:27.440478    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:27.452739    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:27.452750    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:29.967326    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:34.969591    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:34.970047    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:35.040419    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:35.040502    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:35.072361    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:35.072442    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:35.089603    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:35.089680    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:35.100034    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:35.100108    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:35.109933    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:35.109995    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:35.120711    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:35.120774    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:35.130796    8811 logs.go:276] 0 containers: []
	W0729 03:36:35.130807    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:35.130862    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:35.141254    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:35.141273    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:35.141278    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:35.165912    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:35.165926    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:35.181831    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:35.181843    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:35.196920    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:35.196933    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:35.212369    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:35.212380    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:35.224718    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:35.224731    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:35.229656    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:35.229663    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:35.243194    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:35.243208    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:35.255450    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:35.255462    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:35.294672    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:35.294681    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:35.328954    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:35.328966    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:35.340416    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:35.340427    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:35.357876    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:35.357887    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:35.369260    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:35.369270    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:35.380016    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:35.380025    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:35.403893    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:35.403907    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:35.423295    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:35.423304    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:37.940022    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:42.942607    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
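Note: each "Checking"/"stopped" pair above is roughly five seconds apart, matching an HTTP client timeout of 5s on the probe request; "Client.Timeout exceeded while awaiting headers" is the standard net/http error text for that case. A minimal Go sketch of such a probe follows; the 5s timeout and InsecureSkipVerify are illustrative assumptions, not minikube's exact client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A 5s client timeout reproduces the ~5s gap between the
	// "Checking" and "stopped" lines when the apiserver never answers.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// On timeout, err contains "Client.Timeout exceeded while awaiting headers".
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}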
	I0729 03:36:42.942694    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:42.953920    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:42.953990    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:42.965681    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:42.965733    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:42.984471    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:42.984530    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:42.995923    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:42.995994    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:43.006392    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:43.006458    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:43.017697    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:43.017762    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:43.027989    8811 logs.go:276] 0 containers: []
	W0729 03:36:43.028002    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:43.028060    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:43.043327    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
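Note: every gathering round opens the same way: one `docker ps -a` per control-plane component, filtered on the k8s_<component> container-name prefix and formatted to bare IDs, whose line count becomes the "N containers" summary at logs.go:276. A small Go sketch of that enumeration step; the helper name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name carries
// the k8s_<component> prefix, one short ID per output line; the docker
// invocation mirrors the Run: lines above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276
}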
	I0729 03:36:43.043342    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:43.043347    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:43.047867    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:43.047874    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:43.065807    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:43.065821    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:43.087021    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:43.087043    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:43.107736    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:43.107749    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:43.124913    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:43.124921    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:43.137363    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:43.137374    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:43.178025    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:43.178039    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:43.191264    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:43.191278    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:43.203451    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:43.203465    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:43.222921    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:43.222937    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:43.248014    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:43.248027    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:43.285378    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:43.285389    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:43.299334    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:43.299347    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:43.311229    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:43.311241    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:43.327094    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:43.327102    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:43.339731    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:43.339748    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:45.852674    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:50.854271    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:50.854336    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:50.866094    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:50.866171    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:50.877506    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:50.877576    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:50.890156    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:50.890224    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:50.902791    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:50.902866    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:50.913293    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:50.913361    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:50.924473    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:50.924545    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:50.934930    8811 logs.go:276] 0 containers: []
	W0729 03:36:50.934942    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:50.935001    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:50.945737    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:50.945756    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:50.945762    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:50.986149    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:50.986158    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:50.990710    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:50.990716    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:51.004889    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:51.004899    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:51.022999    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:51.023011    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:51.036889    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:51.036902    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:51.055050    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:51.055060    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:51.075070    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:51.075081    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:51.086656    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:51.086667    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:51.101488    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:51.101498    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:51.113659    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:51.113672    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:51.139120    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:51.139131    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:51.175980    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:51.176005    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:51.191056    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:51.191065    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:51.202778    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:51.202789    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:51.214922    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:51.214934    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:51.230056    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:51.230067    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:53.744144    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:58.745012    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:58.745185    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:58.760983    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:58.761062    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:58.773880    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:58.773956    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:58.783928    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:58.783995    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:58.794606    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:58.794679    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:58.805032    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:58.805098    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:58.815752    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:58.815826    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:58.826236    8811 logs.go:276] 0 containers: []
	W0729 03:36:58.826247    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:58.826310    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:58.836736    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:58.836753    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:58.836759    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:58.873382    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:58.873394    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:58.894561    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:58.894572    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:58.905367    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:58.905379    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:58.916930    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:58.916942    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:58.937894    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:58.937904    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:58.953605    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:58.953615    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:58.975624    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:58.975636    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:58.987471    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:58.987488    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:58.992429    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:58.992435    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:59.010033    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:59.010044    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:59.024473    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:59.024488    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:59.048202    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:59.048217    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:59.060303    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:59.060314    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:59.100284    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:59.100295    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:59.118282    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:59.118293    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:59.134038    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:59.134047    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:37:01.650552    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:06.652678    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:06.652800    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:37:06.664545    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:37:06.664610    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:37:06.675305    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:37:06.675376    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:37:06.690230    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:37:06.690299    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:37:06.701590    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:37:06.701660    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:37:06.712306    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:37:06.712390    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:37:06.723989    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:37:06.724052    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:37:06.739092    8811 logs.go:276] 0 containers: []
	W0729 03:37:06.739105    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:37:06.739164    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:37:06.749529    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:37:06.749547    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:37:06.749551    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:37:06.764747    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:37:06.764758    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:37:06.782139    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:37:06.782150    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:37:06.796996    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:37:06.797009    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:37:06.832027    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:37:06.832038    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:37:06.851353    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:37:06.851365    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:37:06.890584    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:37:06.890596    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:37:06.925758    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:37:06.925769    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:37:06.940133    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:37:06.940144    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:37:06.953497    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:37:06.953512    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:37:06.973430    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:37:06.973443    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:37:06.985288    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:37:06.985303    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:37:06.997773    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:37:06.997783    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:37:07.002094    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:37:07.002103    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:37:07.016388    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:37:07.016398    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:37:07.028071    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:37:07.028080    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:37:07.053830    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:37:07.053840    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:37:09.579472    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:14.581661    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:14.581770    8811 kubeadm.go:597] duration metric: took 4m4.367033541s to restartPrimaryControlPlane
	W0729 03:37:14.581858    8811 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 03:37:14.581913    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 03:37:15.620871    8811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.038963791s)
	I0729 03:37:15.620943    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:37:15.625879    8811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:37:15.628686    8811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:37:15.631518    8811 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 03:37:15.631524    8811 kubeadm.go:157] found existing configuration files:
	
	I0729 03:37:15.631549    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf
	I0729 03:37:15.634032    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 03:37:15.634053    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:37:15.636844    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf
	I0729 03:37:15.639766    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 03:37:15.639787    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:37:15.642377    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf
	I0729 03:37:15.644836    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 03:37:15.644853    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:37:15.647771    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf
	I0729 03:37:15.650126    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 03:37:15.650150    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
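Note: the cleanup above applies a simple rule per kubeconfig: grep the file for the expected control-plane endpoint, and if the grep fails (here because the files do not exist at all), remove the file so kubeadm init regenerates it. A compact Go sketch of that check-then-remove loop, with minimal error handling; the endpoint and file list are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint and file names as seen in the log above.
	endpoint := "https://control-plane.minikube.internal:51263"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors: sudo grep <endpoint> <file> failing, then sudo rm -f <file>.
			os.Remove(path)
			fmt.Println("removed stale", path)
		}
	}
}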
	I0729 03:37:15.652773    8811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 03:37:15.669441    8811 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 03:37:15.669549    8811 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 03:37:15.721234    8811 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 03:37:15.721291    8811 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 03:37:15.721345    8811 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 03:37:15.772008    8811 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 03:37:15.776253    8811 out.go:204]   - Generating certificates and keys ...
	I0729 03:37:15.776289    8811 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 03:37:15.776322    8811 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 03:37:15.776398    8811 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 03:37:15.776587    8811 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 03:37:15.776624    8811 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 03:37:15.776653    8811 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 03:37:15.776696    8811 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 03:37:15.776749    8811 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 03:37:15.776820    8811 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 03:37:15.776901    8811 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 03:37:15.776939    8811 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 03:37:15.776996    8811 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 03:37:15.885146    8811 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 03:37:16.110286    8811 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 03:37:16.267765    8811 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 03:37:16.331278    8811 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 03:37:16.359622    8811 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 03:37:16.360132    8811 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 03:37:16.360196    8811 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 03:37:16.461737    8811 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 03:37:16.464878    8811 out.go:204]   - Booting up control plane ...
	I0729 03:37:16.464949    8811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 03:37:16.464989    8811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 03:37:16.465022    8811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 03:37:16.465074    8811 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 03:37:16.465146    8811 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 03:37:20.965865    8811 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504073 seconds
	I0729 03:37:20.965961    8811 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 03:37:20.972001    8811 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 03:37:21.487266    8811 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 03:37:21.487377    8811 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-376000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 03:37:21.994780    8811 kubeadm.go:310] [bootstrap-token] Using token: wqt6bu.fgmw34p07c6uokt1
	I0729 03:37:21.997500    8811 out.go:204]   - Configuring RBAC rules ...
	I0729 03:37:21.997606    8811 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 03:37:21.998302    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 03:37:22.005711    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 03:37:22.007354    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 03:37:22.008768    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 03:37:22.009938    8811 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 03:37:22.014961    8811 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 03:37:22.194268    8811 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 03:37:22.401448    8811 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 03:37:22.402018    8811 kubeadm.go:310] 
	I0729 03:37:22.402054    8811 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 03:37:22.402058    8811 kubeadm.go:310] 
	I0729 03:37:22.402115    8811 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 03:37:22.402145    8811 kubeadm.go:310] 
	I0729 03:37:22.402216    8811 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 03:37:22.402254    8811 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 03:37:22.402282    8811 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 03:37:22.402287    8811 kubeadm.go:310] 
	I0729 03:37:22.402315    8811 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 03:37:22.402317    8811 kubeadm.go:310] 
	I0729 03:37:22.402345    8811 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 03:37:22.402349    8811 kubeadm.go:310] 
	I0729 03:37:22.402395    8811 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 03:37:22.402447    8811 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 03:37:22.402491    8811 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 03:37:22.402493    8811 kubeadm.go:310] 
	I0729 03:37:22.402535    8811 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 03:37:22.402594    8811 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 03:37:22.402602    8811 kubeadm.go:310] 
	I0729 03:37:22.402645    8811 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wqt6bu.fgmw34p07c6uokt1 \
	I0729 03:37:22.402699    8811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 \
	I0729 03:37:22.402715    8811 kubeadm.go:310] 	--control-plane 
	I0729 03:37:22.402718    8811 kubeadm.go:310] 
	I0729 03:37:22.402761    8811 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 03:37:22.402765    8811 kubeadm.go:310] 
	I0729 03:37:22.402815    8811 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wqt6bu.fgmw34p07c6uokt1 \
	I0729 03:37:22.402874    8811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 
	I0729 03:37:22.402934    8811 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 03:37:22.402942    8811 cni.go:84] Creating CNI manager for ""
	I0729 03:37:22.402950    8811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:37:22.407031    8811 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 03:37:22.410015    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 03:37:22.412897    8811 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
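Note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log, but a bridge CNI conflist generally has the shape below. This Go sketch writes an illustrative config of that shape; the field values are assumptions, not the exact bytes minikube generated, and writing under /etc requires root.

package main

import (
	"log"
	"os"
)

// An illustrative bridge CNI conflist of the general shape minikube writes;
// the values here (subnet, plugin options) are assumptions for illustration.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err) // requires root for /etc/cni/net.d
	}
}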
	I0729 03:37:22.417848    8811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 03:37:22.417894    8811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 03:37:22.417922    8811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-376000 minikube.k8s.io/updated_at=2024_07_29T03_37_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=running-upgrade-376000 minikube.k8s.io/primary=true
	I0729 03:37:22.461426    8811 kubeadm.go:1113] duration metric: took 43.573375ms to wait for elevateKubeSystemPrivileges
	I0729 03:37:22.461439    8811 ops.go:34] apiserver oom_adj: -16
	I0729 03:37:22.461447    8811 kubeadm.go:394] duration metric: took 4m12.260784125s to StartCluster
	I0729 03:37:22.461465    8811 settings.go:142] acquiring lock: {Name:mk5fe4de5daf4f1a01814785384dc93f95ac574d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:37:22.461636    8811 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:37:22.462015    8811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:37:22.462220    8811 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:37:22.462246    8811 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 03:37:22.462286    8811 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-376000"
	I0729 03:37:22.462298    8811 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-376000"
	I0729 03:37:22.462309    8811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-376000"
	I0729 03:37:22.462299    8811 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-376000"
	W0729 03:37:22.462342    8811 addons.go:243] addon storage-provisioner should already be in state true
	I0729 03:37:22.462316    8811 config.go:182] Loaded profile config "running-upgrade-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:37:22.462355    8811 host.go:66] Checking if "running-upgrade-376000" exists ...
	I0729 03:37:22.463160    8811 kapi.go:59] client config for running-upgrade-376000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10615c080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 03:37:22.463279    8811 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-376000"
	W0729 03:37:22.463284    8811 addons.go:243] addon default-storageclass should already be in state true
	I0729 03:37:22.463290    8811 host.go:66] Checking if "running-upgrade-376000" exists ...
	I0729 03:37:22.466016    8811 out.go:177] * Verifying Kubernetes components...
	I0729 03:37:22.466412    8811 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 03:37:22.470178    8811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 03:37:22.470185    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:37:22.473052    8811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:37:22.475998    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:37:22.479066    8811 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:37:22.479072    8811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 03:37:22.479077    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:37:22.562758    8811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:37:22.567674    8811 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:37:22.567720    8811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:37:22.571544    8811 api_server.go:72] duration metric: took 109.3155ms to wait for apiserver process to appear ...
	I0729 03:37:22.571553    8811 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:37:22.571559    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:22.609903    8811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 03:37:22.637308    8811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:37:27.573638    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:27.573725    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:32.574249    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:32.574271    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:37.574593    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:37.574623    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:42.575091    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:42.575144    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:47.575830    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:47.575880    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:52.576075    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:52.576101    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 03:37:52.935786    8811 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 03:37:52.940127    8811 out.go:177] * Enabled addons: storage-provisioner
	I0729 03:37:52.947083    8811 addons.go:510] duration metric: took 30.485438042s for enable addons: enabled=[storage-provisioner]
	I0729 03:37:57.577025    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:57.577069    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:02.578327    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:02.578377    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:07.579902    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:07.579930    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:12.582011    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:12.582064    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:17.584138    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:17.584178    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:22.586328    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
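Note: this stretch shows the waiting strategy: probe healthz on a roughly five-second cadence, without re-gathering logs between attempts, and fall back to container enumeration only once the wait budget is spent. A Go sketch of that poll-until-deadline shape; the budget, interval, and TLS handling are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries GET url until it returns 200 OK or the budget elapses.
// The per-attempt timeout and retry interval are chosen to match the cadence
// visible in the log, not taken from minikube's source.
func pollHealthz(url string, budget time.Duration) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for deadline := time.Now().Add(budget); time.Now().Before(deadline); {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(5 * time.Second)
	}
	return false
}

func main() {
	if !pollHealthz("https://10.0.2.15:8443/healthz", time.Minute) {
		fmt.Println("apiserver never became healthy; falling back to log gathering")
	}
}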
	I0729 03:38:22.586500    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:22.597232    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:38:22.597301    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:22.607766    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:38:22.607839    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:22.618866    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:38:22.618935    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:22.629250    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:38:22.629315    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:22.639076    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:38:22.639138    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:22.650226    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:38:22.650288    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:22.660284    8811 logs.go:276] 0 containers: []
	W0729 03:38:22.660299    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:22.660354    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:22.671824    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:38:22.671837    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:38:22.671843    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:22.683962    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:22.683973    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:38:22.715402    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:22.715501    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
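Note: the two "Found kubelet problem" warnings come from scanning the journalctl output for known failure signatures. The RBAC errors themselves mean the node authorizer found no relationship between node 'running-upgrade-376000' and the kube-root-ca.crt ConfigMap, typically because no pod referencing it was yet bound to the node after the reset. A Go sketch of the scanning mechanism, assuming simple substring patterns; minikube's actual pattern set is larger.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Stand-in for `journalctl -u kubelet` output; real input would be piped in.
	journal := `Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627 reflector.go:138] Failed to watch *v1.ConfigMap: forbidden`
	// Illustrative signatures only.
	patterns := []string{"Failed to watch", "failed to list", "forbidden"}
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}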
	I0729 03:38:22.716874    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:22.716883    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:22.754054    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:38:22.754067    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:38:22.768275    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:38:22.768289    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:38:22.779804    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:38:22.779818    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:38:22.791829    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:38:22.791843    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:38:22.809188    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:22.809202    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:22.834236    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:22.834250    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:22.838552    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:38:22.838561    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:38:22.852466    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:38:22.852475    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:38:22.864053    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:38:22.864063    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:38:22.878436    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:38:22.878446    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:38:22.890931    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:22.890940    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:38:22.890970    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:38:22.890975    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:22.890979    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:22.890984    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:22.891003    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:38:32.894932    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:37.897086    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:37.897229    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:37.910328    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:38:37.910404    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:37.922394    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:38:37.922460    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:37.932617    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:38:37.932696    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:37.943363    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:38:37.943425    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:37.953551    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:38:37.953613    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:37.964265    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:38:37.964325    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:37.974045    8811 logs.go:276] 0 containers: []
	W0729 03:38:37.974057    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:37.974117    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:37.985080    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:38:37.985093    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:37.985098    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:38:38.016085    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:38.016182    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:38.017466    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:38.017470    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:38.051172    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:38:38.051183    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:38:38.069018    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:38:38.069027    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:38:38.080480    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:38:38.080490    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:38:38.097820    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:38:38.097831    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:38:38.109372    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:38:38.109382    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:38.120963    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:38.120972    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:38.125471    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:38:38.125480    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:38:38.140602    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:38:38.140614    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:38:38.152032    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:38:38.152042    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:38:38.164563    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:38:38.164572    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:38:38.179603    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:38.179616    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:38.202583    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:38.202591    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:38:38.202614    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:38:38.202618    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:38.202633    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:38.202638    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:38.202642    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
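Each block between the "Problems detected" summaries follows the same cycle: poll the apiserver healthz endpoint, and when the request times out, enumerate the component containers and re-gather their logs. The timestamps suggest roughly a 5-second client timeout (a request issued at :38:32 gives up at :38:37) with about a 10-second pause before the next attempt. A minimal bash sketch of that polling loop, with the endpoint taken from the log and both timings inferred from the timestamps rather than from minikube's source:

    # Poll the apiserver health endpoint until it answers. 10.0.2.15:8443 is
    # the guest address from this log; -k stands in for the cluster CA, which
    # this sketch does not mount.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      echo "$(date +%T) apiserver not healthy yet"
      sleep 10   # minikube gathers component logs here before retrying
    done
    echo "apiserver healthy"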
	I0729 03:38:48.205265    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:53.207565    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:53.207765    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:53.229191    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:38:53.229287    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:53.244264    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:38:53.244346    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:53.256987    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:38:53.257063    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:53.267453    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:38:53.267524    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:53.278601    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:38:53.278668    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:53.289163    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:38:53.289231    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:53.299592    8811 logs.go:276] 0 containers: []
	W0729 03:38:53.299604    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:53.299658    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:53.310237    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:38:53.310252    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:38:53.310257    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:38:53.321932    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:38:53.321942    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:38:53.337624    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:38:53.337636    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:38:53.348883    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:53.348896    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:53.373275    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:53.373282    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:53.377570    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:53.377575    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:53.414662    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:38:53.414673    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:38:53.428980    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:38:53.428994    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:38:53.443800    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:38:53.443811    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:38:53.460617    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:38:53.460631    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:53.472344    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:53.472355    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:38:53.504551    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:53.504651    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:53.505988    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:38:53.505994    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:38:53.520496    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:38:53.520509    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:38:53.535678    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:53.535688    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:38:53.535713    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:38:53.535719    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:53.535724    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:53.535728    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:53.535731    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
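The eight docker ps invocations that open each cycle inventory the control-plane containers by the k8s_<component> name prefix that Docker-managed Kubernetes containers carry. The same inventory can be reproduced by hand on the node; a sketch over the component names seen in this log:

    # List container IDs for each Kubernetes component by its k8s_ name
    # prefix; components with no match (kindnet here) print "none".
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | tr '\n' ' ')
      echo "${c}: ${ids:-none}"
    done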
	I0729 03:39:03.538388    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:08.540696    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:08.540939    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:08.564201    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:08.564292    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:08.579492    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:08.579563    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:08.592371    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:39:08.592446    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:08.607244    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:08.607314    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:08.623052    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:08.623123    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:08.633808    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:08.633877    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:08.644023    8811 logs.go:276] 0 containers: []
	W0729 03:39:08.644034    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:08.644091    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:08.655192    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:08.655208    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:08.655214    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:08.689792    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:08.689890    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:08.691176    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:08.691193    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:08.705690    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:08.705701    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:08.719688    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:08.719698    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:08.737378    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:08.737387    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:08.748316    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:08.748326    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:08.760824    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:08.760837    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:08.772300    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:08.772310    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:08.797507    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:08.797515    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:08.801687    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:08.801696    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:08.841515    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:08.841527    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:08.857855    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:08.857866    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:08.869665    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:08.869674    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:08.892674    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:08.892684    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:08.892710    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:08.892715    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:08.892719    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:08.892723    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:08.892726    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:39:18.896679    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:23.899011    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:23.899318    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:23.949222    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:23.949345    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:23.965777    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:23.965862    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:23.978864    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:39:23.978941    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:23.996366    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:23.996430    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:24.006927    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:24.006999    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:24.017465    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:24.017537    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:24.027100    8811 logs.go:276] 0 containers: []
	W0729 03:39:24.027113    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:24.027171    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:24.038319    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:24.038334    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:24.038341    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:24.050280    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:24.050292    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:24.064843    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:24.064854    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:24.076357    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:24.076369    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:24.094811    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:24.094821    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:24.119732    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:24.119741    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:24.154317    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:24.154331    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:24.168746    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:24.168757    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:24.182436    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:24.182446    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:24.194055    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:24.194067    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:24.205492    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:24.205503    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:24.237144    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:24.237243    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:24.238632    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:24.238640    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:24.243254    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:24.243261    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:24.254666    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:24.254679    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:24.254703    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:24.254708    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:24.254712    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:24.254717    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:24.254719    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:39:34.258675    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:39.260907    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:39.261052    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:39.272704    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:39.272776    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:39.283637    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:39.283721    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:39.294761    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:39:39.294837    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:39.309486    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:39.309554    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:39.320369    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:39.320440    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:39.331862    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:39.331932    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:39.342446    8811 logs.go:276] 0 containers: []
	W0729 03:39:39.342457    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:39.342513    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:39.352809    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:39.352824    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:39.352830    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:39.357310    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:39.357318    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:39.425916    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:39:39.425929    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:39:39.447968    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:39.447980    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:39.468099    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:39.468110    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:39.479040    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:39.479050    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:39.506951    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:39.506961    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:39.521632    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:39.521644    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:39.536664    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:39.536678    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:39.555650    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:39.555664    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:39.567196    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:39.567211    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:39.581061    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:39.581075    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:39.593310    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:39.593319    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:39.626767    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:39.626866    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:39.628200    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:39:39.628208    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:39:39.640222    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:39.640233    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:39.651363    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:39.651373    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:39.651399    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:39.651405    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:39.651462    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:39.651501    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:39.651505    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
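Note that the coredns discovery count jumped from 2 containers to 4 in this cycle (f2e71a487c88 and 84567be55aaf joining feaa048ca969 and 5d89100d144a), consistent with the CoreDNS pods being recreated while the apiserver remains unreachable. minikube tails each of them in turn; the equivalent by hand, using the IDs from this cycle:

    # Tail the last 400 lines of every CoreDNS container seen in this cycle.
    for id in f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a; do
      echo "== coredns ${id} =="
      docker logs --tail 400 "${id}"
    done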
	I0729 03:39:49.653769    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:54.656365    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:54.656550    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:54.680775    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:54.680869    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:54.694329    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:54.694395    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:54.705837    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:39:54.705909    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:54.716536    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:54.716596    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:54.728534    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:54.728603    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:54.740259    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:54.740321    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:54.754771    8811 logs.go:276] 0 containers: []
	W0729 03:39:54.754783    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:54.754839    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:54.765270    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:54.765286    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:54.765292    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:54.780068    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:39:54.780080    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:39:54.791774    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:54.791786    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:54.803678    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:54.803688    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:54.817825    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:54.817839    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:54.830081    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:54.830092    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:54.834943    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:54.834949    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:54.859792    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:54.859801    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:54.895802    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:54.895813    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:54.907675    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:54.907686    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:54.927513    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:54.927528    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:54.959117    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:54.959215    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:54.960512    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:54.960518    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:54.975401    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:39:54.975411    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:39:54.987213    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:54.987224    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:54.999101    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:54.999111    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:55.010280    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:55.010290    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:55.010318    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:55.010322    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:55.010326    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:55.010329    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:55.010332    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:05.014218    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:10.016392    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:10.016566    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:10.031234    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:10.031309    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:10.043274    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:10.043337    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:10.054519    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:10.054597    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:10.065033    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:10.065104    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:10.075287    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:10.075344    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:10.086048    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:10.086120    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:10.096348    8811 logs.go:276] 0 containers: []
	W0729 03:40:10.096357    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:10.096407    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:10.106754    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:10.106772    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:10.106778    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:10.118724    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:10.118737    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:10.135956    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:10.135966    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:10.141169    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:10.141179    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:10.154952    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:10.154964    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:10.166858    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:10.166868    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:10.178892    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:10.178903    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:10.212107    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:10.212206    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:10.213543    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:10.213550    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:10.227138    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:10.227148    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:10.259716    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:10.259727    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:10.272733    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:10.272742    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:10.288099    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:10.288113    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:10.299819    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:10.299829    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:10.324844    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:10.324855    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:10.336234    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:10.336246    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:10.372812    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:10.372822    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:10.372848    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:10.372853    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:10.372856    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:10.372860    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:10.372862    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:20.376834    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:25.378990    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:25.379167    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:25.391300    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:25.391370    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:25.402019    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:25.402080    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:25.412907    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:25.412980    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:25.423321    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:25.423386    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:25.433929    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:25.433992    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:25.445332    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:25.445400    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:25.455416    8811 logs.go:276] 0 containers: []
	W0729 03:40:25.455426    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:25.455481    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:25.465471    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:25.465490    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:25.465495    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:25.476928    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:25.476939    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:25.489546    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:25.489557    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:25.501084    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:25.501096    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:25.515544    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:25.515555    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:25.529214    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:25.529224    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:25.541713    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:25.541727    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:25.556437    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:25.556446    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:25.592667    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:25.592679    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:25.604852    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:25.604863    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:25.619101    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:25.619111    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:25.624202    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:25.624207    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:25.636153    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:25.636166    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:25.654238    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:25.654248    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:25.678866    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:25.678873    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:25.711150    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:25.711247    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:25.712540    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:25.712546    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:25.712570    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:25.712574    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:25.712590    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:25.712593    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:25.712596    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:35.716556    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:40.718689    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:40.718836    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:40.735277    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:40.735354    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:40.746809    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:40.746877    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:40.757097    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:40.757167    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:40.767545    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:40.767616    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:40.777900    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:40.777963    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:40.788555    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:40.788618    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:40.799087    8811 logs.go:276] 0 containers: []
	W0729 03:40:40.799098    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:40.799157    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:40.809091    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:40.809107    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:40.809112    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:40.821556    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:40.821566    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:40.833531    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:40.833542    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:40.845062    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:40.845074    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:40.856983    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:40.856993    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:40.868610    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:40.868620    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:40.882906    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:40.882918    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:40.914264    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:40.914362    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:40.915734    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:40.915740    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:40.951928    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:40.951939    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:40.966028    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:40.966037    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:40.979958    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:40.979969    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:40.998065    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:40.998075    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:41.002303    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:41.002312    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:41.013539    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:41.013548    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:41.037827    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:41.037834    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:41.049513    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:41.049525    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:41.049551    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:41.049557    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:41.049560    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:41.049565    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:41.049567    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:51.053503    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:56.055061    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:56.055120    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:56.066393    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:56.066456    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:56.078365    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:56.078425    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:56.089771    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:56.089845    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:56.100909    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:56.100980    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:56.112007    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:56.112077    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:56.129610    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:56.129679    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:56.140981    8811 logs.go:276] 0 containers: []
	W0729 03:40:56.140998    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:56.141059    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:56.152245    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:56.152264    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:56.152269    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:56.167713    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:56.167728    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:56.181451    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:56.181463    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:56.194227    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:56.194241    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:56.207502    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:56.207514    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:56.223740    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:56.223749    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:56.235942    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:56.235952    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:56.240510    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:56.240517    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:56.254799    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:56.254807    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:56.269280    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:56.269293    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:56.288944    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:56.288958    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:56.322253    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:56.322351    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:56.323728    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:56.323734    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:56.359258    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:56.359269    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:56.371456    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:56.371468    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:56.396670    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:56.396676    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:56.408126    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:56.408137    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:56.408166    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:56.408172    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:56.408176    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:56.408182    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:56.408185    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:41:06.412142    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:11.414308    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:11.414416    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:41:11.427140    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:41:11.427214    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:41:11.439230    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:41:11.439309    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:41:11.450259    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:41:11.450334    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:41:11.461159    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:41:11.461221    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:41:11.471645    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:41:11.471704    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:41:11.481879    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:41:11.481945    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:41:11.491897    8811 logs.go:276] 0 containers: []
	W0729 03:41:11.491910    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:41:11.491968    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:41:11.502845    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:41:11.502861    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:41:11.502866    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:41:11.536869    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:41:11.536969    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:41:11.538307    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:41:11.538313    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:41:11.550338    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:41:11.550349    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:41:11.564939    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:41:11.564949    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:41:11.589245    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:41:11.589271    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:41:11.594008    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:41:11.594022    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:41:11.608896    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:41:11.608908    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:41:11.621242    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:41:11.621257    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:41:11.633184    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:41:11.633195    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:41:11.646312    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:41:11.646322    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:41:11.660756    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:41:11.660769    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:41:11.674148    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:41:11.674157    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:41:11.692423    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:41:11.692435    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:41:11.727558    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:41:11.727572    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:41:11.739404    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:41:11.739414    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:41:11.753294    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:41:11.753305    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:41:11.753333    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:41:11.753338    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:41:11.753343    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:41:11.753347    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:41:11.753351    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:41:21.757262    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:26.759443    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:26.763973    8811 out.go:177] 
	W0729 03:41:26.766936    8811 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 03:41:26.766943    8811 out.go:239] * 
	W0729 03:41:26.767685    8811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:41:26.779832    8811 out.go:177] 

                                                
                                                
** /stderr **
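The stderr trace above has a fixed shape: roughly every ten seconds minikube probes https://10.0.2.15:8443/healthz, the probe dies with a client timeout, and minikube falls back to sweeping component logs; once the 6m0s node-start budget is spent, the run exits with GUEST_START (exit status 80, below). The following is a minimal, self-contained Go sketch of that probe loop; the 5s per-request timeout and 10s interval are inferred from the log timestamps, not taken from minikube's source.

	// healthzprobe.go: a sketch of the polling pattern visible in the log above.
	// The 5s request timeout and 10s poll interval are assumptions inferred from
	// the timestamps; they are not minikube's actual constants.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the "Client.Timeout exceeded" failures above
			Transport: &http.Transport{
				// the guest apiserver presents a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget from the exit message
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(10 * time.Second) // next "Checking apiserver healthz" round
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}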
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-376000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 03:41:26.86826 -0700 PDT m=+1327.276549543
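Between health probes the run performs the same diagnostic sweep each cycle: list the container ID for each k8s_<component> name with docker ps, then tail the last 400 lines of each container's logs (plus journalctl for kubelet and Docker, dmesg, and kubectl describe nodes). Below is a hypothetical Go re-creation of the docker half of that sweep, built only from the commands shown in the log; it is not minikube's logs.go.

	// gatherlogs.go: hypothetical re-creation of the per-component log sweep
	// ("Gathering logs for ..." above); component names are taken from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			// mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// kindnet hits this branch above: No container was found matching "kindnet"
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
			}
		}
	}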
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-376000 -n running-upgrade-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-376000 -n running-upgrade-376000: exit status 2 (15.635531708s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
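Before dumping logs, the post-mortem asks the profile for its host state using a Go template selector. Here exit status 2 arrived together with "Running" on stdout, which the harness records as a soft failure ("may be ok"). A hypothetical Go sketch of re-running that probe, with the binary path and flags copied from the Run: line above; the exit-code handling mirrors the harness note, not documented minikube behavior:

	// statuscheck.go: hypothetical re-run of the post-mortem status probe above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "running-upgrade-376000", "-n", "running-upgrade-376000")
		out, err := cmd.Output()
		fmt.Printf("host state: %s\n", out) // prints "Running" in the capture above
		if exitErr, ok := err.(*exec.ExitError); ok {
			// exit status 2 still reported "Running", so treat it as a soft failure
			fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}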
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-376000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-201000          | force-systemd-flag-201000 | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-814000              | force-systemd-env-814000  | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-814000           | force-systemd-env-814000  | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT | 29 Jul 24 03:31 PDT |
	| start   | -p docker-flags-761000                | docker-flags-761000       | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-201000             | force-systemd-flag-201000 | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-201000          | force-systemd-flag-201000 | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT | 29 Jul 24 03:31 PDT |
	| start   | -p cert-expiration-247000             | cert-expiration-247000    | jenkins | v1.33.1 | 29 Jul 24 03:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-761000 ssh               | docker-flags-761000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-761000 ssh               | docker-flags-761000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-761000                | docker-flags-761000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT | 29 Jul 24 03:32 PDT |
	| start   | -p cert-options-126000                | cert-options-126000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-126000 ssh               | cert-options-126000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-126000 -- sudo        | cert-options-126000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-126000                | cert-options-126000       | jenkins | v1.33.1 | 29 Jul 24 03:32 PDT | 29 Jul 24 03:32 PDT |
	| start   | -p running-upgrade-376000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 03:32 PDT | 29 Jul 24 03:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-376000             | running-upgrade-376000    | jenkins | v1.33.1 | 29 Jul 24 03:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-247000             | cert-expiration-247000    | jenkins | v1.33.1 | 29 Jul 24 03:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-247000             | cert-expiration-247000    | jenkins | v1.33.1 | 29 Jul 24 03:35 PDT | 29 Jul 24 03:35 PDT |
	| start   | -p kubernetes-upgrade-520000          | kubernetes-upgrade-520000 | jenkins | v1.33.1 | 29 Jul 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-520000          | kubernetes-upgrade-520000 | jenkins | v1.33.1 | 29 Jul 24 03:35 PDT | 29 Jul 24 03:35 PDT |
	| start   | -p kubernetes-upgrade-520000          | kubernetes-upgrade-520000 | jenkins | v1.33.1 | 29 Jul 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-520000          | kubernetes-upgrade-520000 | jenkins | v1.33.1 | 29 Jul 24 03:35 PDT | 29 Jul 24 03:35 PDT |
	| start   | -p stopped-upgrade-590000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 03:35 PDT | 29 Jul 24 03:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-590000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 03:36 PDT | 29 Jul 24 03:36 PDT |
	| start   | -p stopped-upgrade-590000             | stopped-upgrade-590000    | jenkins | v1.33.1 | 29 Jul 24 03:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:36:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:36:22.058552    8948 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:36:22.058755    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:36:22.058759    8948 out.go:304] Setting ErrFile to fd 2...
	I0729 03:36:22.058763    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:36:22.058919    8948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:36:22.060199    8948 out.go:298] Setting JSON to false
	I0729 03:36:22.080524    8948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5751,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:36:22.080591    8948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:36:22.085781    8948 out.go:177] * [stopped-upgrade-590000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:36:22.093804    8948 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:36:22.093844    8948 notify.go:220] Checking for updates...
	I0729 03:36:22.110324    8948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:36:22.113773    8948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:36:22.117744    8948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:36:22.120845    8948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:36:22.123747    8948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:36:22.126986    8948 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:36:22.130759    8948 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 03:36:22.133694    8948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:36:22.137745    8948 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:36:22.143711    8948 start.go:297] selected driver: qemu2
	I0729 03:36:22.143716    8948 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:36:22.143769    8948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:36:22.146619    8948 cni.go:84] Creating CNI manager for ""
	I0729 03:36:22.146637    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:36:22.146658    8948 start.go:340] cluster config:
	{Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
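The cluster-config dumps above look like one Go struct printed with %+v-style formatting, which is why each arrives as a single very long line. Below is a trimmed, hypothetical mirror of a few of its fields, only to make the shape readable; field names and values are copied from the dump, the types are guesses, and this is not minikube's actual config package.

	// clusterconfig.go: trimmed, hypothetical mirror of the dump above.
	package main

	import "fmt"

	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		ServiceCIDR       string
	}

	type Node struct {
		IP                string
		Port              int
		KubernetesVersion string
		ControlPlane      bool
		Worker            bool
	}

	type ClusterConfig struct {
		Name             string
		Memory           int // MiB
		CPUs             int
		Driver           string
		KubernetesConfig KubernetesConfig
		Nodes            []Node
	}

	func main() {
		cfg := ClusterConfig{ // values taken from the dump above
			Name:   "stopped-upgrade-590000",
			Memory: 2200,
			CPUs:   2,
			Driver: "qemu2",
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.24.1",
				ClusterName:       "stopped-upgrade-590000",
				ContainerRuntime:  "docker",
				ServiceCIDR:       "10.96.0.0/12",
			},
			Nodes: []Node{{IP: "10.0.2.15", Port: 8443, KubernetesVersion: "v1.24.1", ControlPlane: true, Worker: true}},
		}
		fmt.Printf("%+v\n", cfg)
	}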
	I0729 03:36:22.146709    8948 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:36:19.125975    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:19.126370    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:19.162137    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:19.162262    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:19.181768    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:19.181851    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:19.196754    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:19.196829    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:19.216737    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:19.216824    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:19.227655    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:19.227724    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:19.238492    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:19.238557    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:19.248817    8811 logs.go:276] 0 containers: []
	W0729 03:36:19.248832    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:19.248894    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:19.259120    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:19.259138    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:19.259144    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:19.274682    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:19.274692    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:19.286442    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:19.286452    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:19.298941    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:19.298951    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:19.333985    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:19.334000    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:19.346021    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:19.346034    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:19.364887    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:19.364898    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:19.384242    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:19.384259    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:19.395797    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:19.395811    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:19.410482    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:19.410496    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:19.428532    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:19.428546    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:19.444355    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:19.444369    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:19.481962    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:19.481970    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:19.502364    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:19.502378    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:19.520561    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:19.520574    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:19.543307    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:19.543313    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:19.547570    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:19.547576    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:22.063300    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:22.154750    8948 out.go:177] * Starting "stopped-upgrade-590000" primary control-plane node in "stopped-upgrade-590000" cluster
	I0729 03:36:22.157762    8948 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 03:36:22.157779    8948 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 03:36:22.157786    8948 cache.go:56] Caching tarball of preloaded images
	I0729 03:36:22.157841    8948 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:36:22.157846    8948 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 03:36:22.157898    8948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/config.json ...
	I0729 03:36:22.158369    8948 start.go:360] acquireMachinesLock for stopped-upgrade-590000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:36:22.158402    8948 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "stopped-upgrade-590000"
	I0729 03:36:22.158411    8948 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:36:22.158415    8948 fix.go:54] fixHost starting: 
	I0729 03:36:22.158518    8948 fix.go:112] recreateIfNeeded on stopped-upgrade-590000: state=Stopped err=<nil>
	W0729 03:36:22.158526    8948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:36:22.162725    8948 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-590000" ...
	I0729 03:36:22.170750    8948 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:36:22.170807    8948 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51434-:22,hostfwd=tcp::51435-:2376,hostname=stopped-upgrade-590000 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/disk.qcow2
	I0729 03:36:22.219151    8948 main.go:141] libmachine: STDOUT: 
	I0729 03:36:22.219182    8948 main.go:141] libmachine: STDERR: 
	I0729 03:36:22.219189    8948 main.go:141] libmachine: Waiting for VM to start (ssh -p 51434 docker@127.0.0.1)...
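	(Annotated breakdown of the qemu-system-aarch64 invocation above, for readers unpacking the flags; paths are abbreviated with …, everything else is copied from the log line:)

	    qemu-system-aarch64 \
	        -M virt,highmem=off \                  # arm64 'virt' board; high memory disabled for hvf compatibility
	        -cpu host -accel hvf \                 # host CPU passthrough via macOS Hypervisor.framework
	        -drive file=…/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \  # UEFI firmware image
	        -m 2200 -smp 2 \                       # Memory:2200 / CPUs:2 from the machine config dump
	        -boot d -cdrom …/boot2docker.iso \     # boot the minikube guest ISO
	        -qmp unix:…/monitor,server,nowait \    # QMP control socket for the driver
	        -pidfile …/qemu.pid \
	        -nic user,model=virtio,hostfwd=tcp::51434-:22,hostfwd=tcp::51435-:2376 \  # SSH and Docker TLS port forwards
	        -daemonize …/disk.qcow2                # persistent disk image; detach once the VM is up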
	I0729 03:36:27.063523    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:27.063787    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:27.089765    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:27.089886    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:27.106922    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:27.107007    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:27.125012    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:27.125073    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:27.139711    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:27.139784    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:27.150022    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:27.150089    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:27.160603    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:27.160674    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:27.170724    8811 logs.go:276] 0 containers: []
	W0729 03:36:27.170736    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:27.170792    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:27.181183    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:27.181198    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:27.181203    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:27.220812    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:27.220821    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:27.234918    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:27.234931    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:27.246301    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:27.246314    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:27.263821    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:27.263833    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:27.275491    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:27.275503    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:27.279770    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:27.279779    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:27.313280    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:27.313291    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:27.327911    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:27.327923    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:27.339395    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:27.339407    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:27.356764    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:27.356776    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:27.379688    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:27.379695    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:27.393411    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:27.393421    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:27.413120    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:27.413133    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:27.428705    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:27.428715    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:27.440466    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:27.440478    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:27.452739    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:27.452750    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:29.967326    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:34.969591    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:34.970047    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:35.040419    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:35.040502    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:35.072361    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:35.072442    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:35.089603    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:35.089680    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:35.100034    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:35.100108    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:35.109933    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:35.109995    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:35.120711    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:35.120774    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:35.130796    8811 logs.go:276] 0 containers: []
	W0729 03:36:35.130807    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:35.130862    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:35.141254    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:35.141273    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:35.141278    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:35.165912    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:35.165926    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:35.181831    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:35.181843    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:35.196920    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:35.196933    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:35.212369    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:35.212380    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:35.224718    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:35.224731    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:35.229656    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:35.229663    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:35.243194    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:35.243208    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:35.255450    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:35.255462    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:35.294672    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:35.294681    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:35.328954    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:35.328966    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:35.340416    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:35.340427    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:35.357876    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:35.357887    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:35.369260    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:35.369270    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:35.380016    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:35.380025    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:35.403893    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:35.403907    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:35.423295    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:35.423304    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:42.040781    8948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/config.json ...
	I0729 03:36:42.041132    8948 machine.go:94] provisionDockerMachine start ...
	I0729 03:36:42.041216    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.041457    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.041464    8948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 03:36:37.940022    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:42.113969    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 03:36:42.113998    8948 buildroot.go:166] provisioning hostname "stopped-upgrade-590000"
	I0729 03:36:42.114088    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.114273    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.114283    8948 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-590000 && echo "stopped-upgrade-590000" | sudo tee /etc/hostname
	I0729 03:36:42.181715    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-590000
	
	I0729 03:36:42.181770    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.181887    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.181895    8948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-590000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-590000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-590000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 03:36:42.243490    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 03:36:42.243506    8948 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19337-6349/.minikube CaCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19337-6349/.minikube}
	I0729 03:36:42.243515    8948 buildroot.go:174] setting up certificates
	I0729 03:36:42.243520    8948 provision.go:84] configureAuth start
	I0729 03:36:42.243526    8948 provision.go:143] copyHostCerts
	I0729 03:36:42.243611    8948 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem, removing ...
	I0729 03:36:42.243618    8948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem
	I0729 03:36:42.243848    8948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem (1082 bytes)
	I0729 03:36:42.244057    8948 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem, removing ...
	I0729 03:36:42.244061    8948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem
	I0729 03:36:42.244127    8948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem (1123 bytes)
	I0729 03:36:42.244243    8948 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem, removing ...
	I0729 03:36:42.244246    8948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem
	I0729 03:36:42.244297    8948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem (1679 bytes)
	I0729 03:36:42.244379    8948 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-590000 san=[127.0.0.1 localhost minikube stopped-upgrade-590000]
	I0729 03:36:42.395932    8948 provision.go:177] copyRemoteCerts
	I0729 03:36:42.395978    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 03:36:42.395987    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:36:42.431028    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 03:36:42.437578    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 03:36:42.444119    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 03:36:42.451416    8948 provision.go:87] duration metric: took 207.895459ms to configureAuth
	I0729 03:36:42.451426    8948 buildroot.go:189] setting minikube options for container-runtime
	I0729 03:36:42.451544    8948 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:36:42.451582    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.451673    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.451678    8948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 03:36:42.508671    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 03:36:42.508681    8948 buildroot.go:70] root file system type: tmpfs
	I0729 03:36:42.508735    8948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 03:36:42.508798    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.508919    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.508956    8948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 03:36:42.573358    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 03:36:42.573416    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.573542    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.573554    8948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 03:36:42.933619    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 03:36:42.933634    8948 machine.go:97] duration metric: took 892.512917ms to provisionDockerMachine
	I0729 03:36:42.933640    8948 start.go:293] postStartSetup for "stopped-upgrade-590000" (driver="qemu2")
	I0729 03:36:42.933646    8948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 03:36:42.933701    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 03:36:42.933709    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:36:42.964116    8948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 03:36:42.965875    8948 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 03:36:42.965884    8948 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19337-6349/.minikube/addons for local assets ...
	I0729 03:36:42.965968    8948 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19337-6349/.minikube/files for local assets ...
	I0729 03:36:42.966115    8948 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem -> 68432.pem in /etc/ssl/certs
	I0729 03:36:42.966242    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 03:36:42.969062    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem --> /etc/ssl/certs/68432.pem (1708 bytes)
	I0729 03:36:42.976230    8948 start.go:296] duration metric: took 42.583542ms for postStartSetup
	I0729 03:36:42.976250    8948 fix.go:56] duration metric: took 20.818237709s for fixHost
	I0729 03:36:42.976302    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.976427    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.976433    8948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 03:36:43.037656    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249403.057969212
	
	I0729 03:36:43.037667    8948 fix.go:216] guest clock: 1722249403.057969212
	I0729 03:36:43.037671    8948 fix.go:229] Guest: 2024-07-29 03:36:43.057969212 -0700 PDT Remote: 2024-07-29 03:36:42.976252 -0700 PDT m=+20.952243084 (delta=81.717212ms)
	I0729 03:36:43.037688    8948 fix.go:200] guest clock delta is within tolerance: 81.717212ms
	I0729 03:36:43.037691    8948 start.go:83] releasing machines lock for "stopped-upgrade-590000", held for 20.879690375s
	I0729 03:36:43.037759    8948 ssh_runner.go:195] Run: cat /version.json
	I0729 03:36:43.037769    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:36:43.037802    8948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 03:36:43.037841    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	W0729 03:36:43.038471    8948 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51434: connect: connection refused
	I0729 03:36:43.038495    8948 retry.go:31] will retry after 312.978852ms: dial tcp [::1]:51434: connect: connection refused
	W0729 03:36:43.068083    8948 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 03:36:43.068163    8948 ssh_runner.go:195] Run: systemctl --version
	I0729 03:36:43.070279    8948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 03:36:43.072075    8948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 03:36:43.072113    8948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 03:36:43.075472    8948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 03:36:43.080900    8948 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 03:36:43.080913    8948 start.go:495] detecting cgroup driver to use...
	I0729 03:36:43.081004    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:36:43.090708    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 03:36:43.094045    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 03:36:43.097147    8948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 03:36:43.097191    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 03:36:43.100421    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:36:43.103915    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 03:36:43.107468    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:36:43.111000    8948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 03:36:43.114425    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 03:36:43.117596    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 03:36:43.120633    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 03:36:43.123821    8948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 03:36:43.127108    8948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 03:36:43.129932    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:43.217774    8948 ssh_runner.go:195] Run: sudo systemctl restart containerd
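	(Condensed, the containerd reconfiguration performed by the sed calls above amounts to the script below; this is a sketch of what the log already ran, not an additional step:)

	    # force the cgroupfs driver and the pause:3.7 sandbox image in /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd   # apply the new config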
	I0729 03:36:43.224017    8948 start.go:495] detecting cgroup driver to use...
	I0729 03:36:43.224102    8948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 03:36:43.230784    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:36:43.236316    8948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 03:36:43.245313    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:36:43.250195    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 03:36:43.254995    8948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 03:36:43.312016    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 03:36:43.317530    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:36:43.323887    8948 ssh_runner.go:195] Run: which cri-dockerd
	I0729 03:36:43.325566    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 03:36:43.328633    8948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 03:36:43.334020    8948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 03:36:43.412838    8948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 03:36:43.499322    8948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 03:36:43.499390    8948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 03:36:43.506173    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:43.583853    8948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:36:44.746553    8948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162706833s)
	I0729 03:36:44.746612    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 03:36:44.752824    8948 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 03:36:44.759906    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:36:44.764475    8948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 03:36:44.845113    8948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 03:36:44.931817    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:45.000491    8948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 03:36:45.006437    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:36:45.010781    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:45.081592    8948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 03:36:45.123263    8948 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 03:36:45.123341    8948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 03:36:45.125664    8948 start.go:563] Will wait 60s for crictl version
	I0729 03:36:45.125699    8948 ssh_runner.go:195] Run: which crictl
	I0729 03:36:45.126995    8948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 03:36:45.141295    8948 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 03:36:45.141382    8948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:36:45.157246    8948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:36:45.176981    8948 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 03:36:45.177045    8948 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 03:36:45.178256    8948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
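	(The one-liner above uses a filter-then-append idiom so the host.minikube.internal mapping is written exactly once; unpacked:)

	    { grep -v $'\thost.minikube.internal$' /etc/hosts;    # drop any stale entry
	      echo $'10.0.2.2\thost.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts                          # replace the file wholesale rather than edit in place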
	I0729 03:36:45.181666    8948 kubeadm.go:883] updating cluster {Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 03:36:45.181715    8948 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 03:36:45.181756    8948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:36:45.192184    8948 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 03:36:45.192193    8948 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 03:36:45.192239    8948 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 03:36:45.195704    8948 ssh_runner.go:195] Run: which lz4
	I0729 03:36:45.197035    8948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 03:36:45.198313    8948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 03:36:45.198322    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 03:36:46.140437    8948 docker.go:649] duration metric: took 943.445875ms to copy over tarball
	I0729 03:36:46.140508    8948 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
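	(Note: the stat format verbs printed as %!s(MISSING)/%!y(MISSING) throughout this log are an artifact of minikube re-formatting strings that contain % signs; the command almost certainly ran as stat -c "%s %y". The preload path just taken, condensed into a sketch with abbreviated paths — the harness ships the file over its own SSH channel rather than a literal scp:)

	    stat -c "%s %y" /preloaded.tar.lz4 \
	        || scp …/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 /preloaded.tar.lz4   # ship tarball if absent
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4     # unpack image layers
	    rm /preloaded.tar.lz4                                                                            # free guest disk space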
	I0729 03:36:42.942607    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:42.942694    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:42.953920    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:42.953990    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:42.965681    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:42.965733    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:42.984471    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:42.984530    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:42.995923    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:42.995994    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:43.006392    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:43.006458    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:43.017697    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:43.017762    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:43.027989    8811 logs.go:276] 0 containers: []
	W0729 03:36:43.028002    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:43.028060    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:43.043327    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:43.043342    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:43.043347    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:43.047867    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:43.047874    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:43.065807    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:43.065821    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:43.087021    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:43.087043    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:43.107736    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:43.107749    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:43.124913    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:43.124921    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:43.137363    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:43.137374    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:43.178025    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:43.178039    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:43.191264    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:43.191278    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:43.203451    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:43.203465    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:43.222921    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:43.222937    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:43.248014    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:43.248027    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:43.285378    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:43.285389    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:43.299334    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:43.299347    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:43.311229    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:43.311241    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:43.327094    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:43.327102    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:43.339731    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:43.339748    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:45.852674    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:47.346137    8948 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.205638s)
	I0729 03:36:47.346151    8948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 03:36:47.361540    8948 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 03:36:47.364520    8948 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 03:36:47.369926    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:47.444539    8948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:36:49.089792    8948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.645269708s)
	I0729 03:36:49.089899    8948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:36:49.101194    8948 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 03:36:49.101205    8948 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 03:36:49.101210    8948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 03:36:49.105268    8948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.107092    8948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.108951    8948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.109497    8948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.111811    8948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.111809    8948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.113636    8948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.113672    8948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.115090    8948 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.115205    8948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.124193    8948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.124211    8948 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 03:36:49.125907    8948 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.125943    8948 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.126954    8948 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 03:36:49.127841    8948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.528827    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.529449    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.537710    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.537860    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.540887    8948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 03:36:49.540916    8948 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.540963    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.544521    8948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 03:36:49.544541    8948 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.544577    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.562666    8948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 03:36:49.562685    8948 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.562721    8948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 03:36:49.562731    8948 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.562742    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.562760    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.565449    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 03:36:49.565950    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0729 03:36:49.568102    8948 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 03:36:49.568220    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.572369    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 03:36:49.587450    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 03:36:49.587464    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 03:36:49.587533    8948 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 03:36:49.587546    8948 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 03:36:49.587587    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 03:36:49.592380    8948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 03:36:49.592410    8948 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.592464    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.593871    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.601572    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 03:36:49.601699    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 03:36:49.610395    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 03:36:49.610419    8948 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 03:36:49.610436    8948 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.610477    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.610511    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 03:36:49.612630    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 03:36:49.612646    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 03:36:49.621251    8948 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 03:36:49.621263    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 03:36:49.629938    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 03:36:49.629964    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 03:36:49.629967    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 03:36:49.630063    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 03:36:49.659589    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 03:36:49.659598    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 03:36:49.659628    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 03:36:49.717000    8948 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 03:36:49.717013    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 03:36:49.750310    8948 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 03:36:49.750416    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.799177    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 03:36:49.799196    8948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 03:36:49.799222    8948 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.799287    8948 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.829743    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 03:36:49.829868    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 03:36:49.843092    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 03:36:49.843122    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 03:36:49.902265    8948 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 03:36:49.902279    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 03:36:50.254319    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 03:36:50.254343    8948 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 03:36:50.254352    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 03:36:50.407046    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 03:36:50.407095    8948 cache_images.go:92] duration metric: took 1.305902292s to LoadCachedImages
	W0729 03:36:50.407156    8948 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 03:36:50.407165    8948 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 03:36:50.407221    8948 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-590000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 03:36:50.407310    8948 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 03:36:50.421363    8948 cni.go:84] Creating CNI manager for ""
	I0729 03:36:50.421375    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:36:50.421380    8948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 03:36:50.421388    8948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-590000 NodeName:stopped-upgrade-590000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 03:36:50.421451    8948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-590000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 03:36:50.421515    8948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 03:36:50.424526    8948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 03:36:50.424568    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 03:36:50.427706    8948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 03:36:50.432777    8948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 03:36:50.437710    8948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 03:36:50.442714    8948 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 03:36:50.443979    8948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 03:36:50.447556    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:50.521752    8948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:36:50.530809    8948 certs.go:68] Setting up /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000 for IP: 10.0.2.15
	I0729 03:36:50.530817    8948 certs.go:194] generating shared ca certs ...
	I0729 03:36:50.530827    8948 certs.go:226] acquiring lock for ca certs: {Name:mk5485201dd0b8c49ea299ac713a7956ec13f382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.531004    8948 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.key
	I0729 03:36:50.531054    8948 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.key
	I0729 03:36:50.531059    8948 certs.go:256] generating profile certs ...
	I0729 03:36:50.531130    8948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key
	I0729 03:36:50.531149    8948 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb
	I0729 03:36:50.531159    8948 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 03:36:50.652465    8948 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb ...
	I0729 03:36:50.652480    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb: {Name:mkba9908a3833f05a0fd05760f672abad4b9cc55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.652758    8948 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb ...
	I0729 03:36:50.652763    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb: {Name:mk9759d71abedb9e6737f26ae1e02520ea933ac2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.652903    8948 certs.go:381] copying /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb -> /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt
	I0729 03:36:50.653036    8948 certs.go:385] copying /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb -> /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key
	I0729 03:36:50.653195    8948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/proxy-client.key
	I0729 03:36:50.653323    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843.pem (1338 bytes)
	W0729 03:36:50.653353    8948 certs.go:480] ignoring /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843_empty.pem, impossibly tiny 0 bytes
	I0729 03:36:50.653358    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 03:36:50.653384    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem (1082 bytes)
	I0729 03:36:50.653411    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem (1123 bytes)
	I0729 03:36:50.653439    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem (1679 bytes)
	I0729 03:36:50.653490    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem (1708 bytes)
	I0729 03:36:50.653816    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 03:36:50.660869    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 03:36:50.667915    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 03:36:50.675484    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 03:36:50.683045    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 03:36:50.690222    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 03:36:50.697140    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 03:36:50.703974    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 03:36:50.711427    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843.pem --> /usr/share/ca-certificates/6843.pem (1338 bytes)
	I0729 03:36:50.718027    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem --> /usr/share/ca-certificates/68432.pem (1708 bytes)
	I0729 03:36:50.724592    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 03:36:50.731638    8948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 03:36:50.736735    8948 ssh_runner.go:195] Run: openssl version
	I0729 03:36:50.738599    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6843.pem && ln -fs /usr/share/ca-certificates/6843.pem /etc/ssl/certs/6843.pem"
	I0729 03:36:50.741509    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6843.pem
	I0729 03:36:50.742831    8948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:20 /usr/share/ca-certificates/6843.pem
	I0729 03:36:50.742851    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6843.pem
	I0729 03:36:50.744639    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6843.pem /etc/ssl/certs/51391683.0"
	I0729 03:36:50.747858    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68432.pem && ln -fs /usr/share/ca-certificates/68432.pem /etc/ssl/certs/68432.pem"
	I0729 03:36:50.751200    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68432.pem
	I0729 03:36:50.752702    8948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:20 /usr/share/ca-certificates/68432.pem
	I0729 03:36:50.752723    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68432.pem
	I0729 03:36:50.754496    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68432.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 03:36:50.757315    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 03:36:50.760260    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:36:50.761754    8948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:36:50.761769    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:36:50.763483    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 03:36:50.766503    8948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 03:36:50.767956    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 03:36:50.770550    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 03:36:50.772512    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 03:36:50.774750    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 03:36:50.776660    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 03:36:50.778487    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 03:36:50.780334    8948 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:36:50.780418    8948 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:36:50.790766    8948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 03:36:50.793798    8948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 03:36:50.793803    8948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 03:36:50.793826    8948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 03:36:50.796631    8948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:36:50.796926    8948 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-590000" does not appear in /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:36:50.797038    8948 kubeconfig.go:62] /Users/jenkins/minikube-integration/19337-6349/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-590000" cluster setting kubeconfig missing "stopped-upgrade-590000" context setting]
	I0729 03:36:50.797229    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.797621    8948 kapi.go:59] client config for stopped-upgrade-590000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b60080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 03:36:50.797914    8948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 03:36:50.800589    8948 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-590000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 03:36:50.800597    8948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 03:36:50.800631    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:36:50.811291    8948 docker.go:483] Stopping containers: [5ec83535d1f0 0c6f4763c087 6c9e82fc6ad9 2ed58f54ac75 15a008cb819a 5ca831426e6a a5ca2a3a4957 5b0322cd745f]
	I0729 03:36:50.811360    8948 ssh_runner.go:195] Run: docker stop 5ec83535d1f0 0c6f4763c087 6c9e82fc6ad9 2ed58f54ac75 15a008cb819a 5ca831426e6a a5ca2a3a4957 5b0322cd745f
	I0729 03:36:50.821767    8948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 03:36:50.827282    8948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:36:50.829980    8948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 03:36:50.829985    8948 kubeadm.go:157] found existing configuration files:
	
	I0729 03:36:50.830009    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf
	I0729 03:36:50.832397    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 03:36:50.832417    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:36:50.835310    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf
	I0729 03:36:50.838117    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 03:36:50.838135    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:36:50.840570    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf
	I0729 03:36:50.843389    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 03:36:50.843414    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:36:50.846018    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf
	I0729 03:36:50.848398    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 03:36:50.848419    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 03:36:50.851351    8948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:36:50.854141    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:50.876439    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.584068    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.714622    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.738480    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.760997    8948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:36:51.761076    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:36:50.854271    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:50.854336    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:50.866094    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:50.866171    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:50.877506    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:50.877576    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:50.890156    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:50.890224    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:50.902791    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:50.902866    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:50.913293    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:50.913361    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:50.924473    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:50.924545    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:50.934930    8811 logs.go:276] 0 containers: []
	W0729 03:36:50.934942    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:50.935001    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:50.945737    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:50.945756    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:50.945762    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:50.986149    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:50.986158    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:50.990710    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:50.990716    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:51.004889    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:51.004899    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:51.022999    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:51.023011    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:36:51.036889    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:51.036902    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:51.055050    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:51.055060    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:51.075070    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:51.075081    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:51.086656    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:51.086667    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:51.101488    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:51.101498    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:51.113659    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:51.113672    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:51.139120    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:51.139131    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:51.175980    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:51.176005    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:51.191056    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:51.191065    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:51.202778    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:51.202789    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:51.214922    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:51.214934    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:51.230056    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:51.230067    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:52.262424    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:36:52.762926    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:36:52.767257    8948 api_server.go:72] duration metric: took 1.006281125s to wait for apiserver process to appear ...
	I0729 03:36:52.767268    8948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:36:52.767276    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:53.744144    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:57.769286    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:57.769342    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:58.745012    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:58.745185    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:36:58.760983    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:36:58.761062    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:36:58.773880    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:36:58.773956    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:36:58.783928    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:36:58.783995    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:36:58.794606    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:36:58.794679    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:36:58.805032    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:36:58.805098    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:36:58.815752    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:36:58.815826    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:36:58.826236    8811 logs.go:276] 0 containers: []
	W0729 03:36:58.826247    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:36:58.826310    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:36:58.836736    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:36:58.836753    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:36:58.836759    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:36:58.873382    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:36:58.873394    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:36:58.894561    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:36:58.894572    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:36:58.905367    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:36:58.905379    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:36:58.916930    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:36:58.916942    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:36:58.937894    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:36:58.937904    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:36:58.953605    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:36:58.953615    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:36:58.975624    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:36:58.975636    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:36:58.987471    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:36:58.987488    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:36:58.992429    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:36:58.992435    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:36:59.010033    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:36:59.010044    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:36:59.024473    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:36:59.024488    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:36:59.048202    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:36:59.048217    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:36:59.060303    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:36:59.060314    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:36:59.100284    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:36:59.100295    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:36:59.118282    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:36:59.118293    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:36:59.134038    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:36:59.134047    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:37:01.650552    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:02.769651    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:02.769706    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:06.652678    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:06.652800    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:37:06.664545    8811 logs.go:276] 2 containers: [bf07931eab79 86242cc8dea1]
	I0729 03:37:06.664610    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:37:06.675305    8811 logs.go:276] 2 containers: [71b4ba4fb8fb 228f0e7d954c]
	I0729 03:37:06.675376    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:37:06.690230    8811 logs.go:276] 1 containers: [4eb8bb55c33b]
	I0729 03:37:06.690299    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:37:06.701590    8811 logs.go:276] 2 containers: [fc9c6a5c3709 c706c2efe503]
	I0729 03:37:06.701660    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:37:06.712306    8811 logs.go:276] 1 containers: [02fbf8081e77]
	I0729 03:37:06.712390    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:37:06.723989    8811 logs.go:276] 2 containers: [cb019a1e7ed2 2ea8d8b5030a]
	I0729 03:37:06.724052    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:37:06.739092    8811 logs.go:276] 0 containers: []
	W0729 03:37:06.739105    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:37:06.739164    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:37:06.749529    8811 logs.go:276] 2 containers: [ebe7d25c0855 7d339eef52dc]
	I0729 03:37:06.749547    8811 logs.go:123] Gathering logs for kube-scheduler [c706c2efe503] ...
	I0729 03:37:06.749551    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c706c2efe503"
	I0729 03:37:06.764747    8811 logs.go:123] Gathering logs for kube-controller-manager [cb019a1e7ed2] ...
	I0729 03:37:06.764758    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb019a1e7ed2"
	I0729 03:37:06.782139    8811 logs.go:123] Gathering logs for kube-controller-manager [2ea8d8b5030a] ...
	I0729 03:37:06.782150    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea8d8b5030a"
	I0729 03:37:06.796996    8811 logs.go:123] Gathering logs for storage-provisioner [ebe7d25c0855] ...
	I0729 03:37:06.797009    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe7d25c0855"
	I0729 03:37:06.832027    8811 logs.go:123] Gathering logs for storage-provisioner [7d339eef52dc] ...
	I0729 03:37:06.832038    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d339eef52dc"
	I0729 03:37:06.851353    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:37:06.851365    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:37:06.890584    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:37:06.890596    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:37:06.925758    8811 logs.go:123] Gathering logs for etcd [228f0e7d954c] ...
	I0729 03:37:06.925769    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 228f0e7d954c"
	I0729 03:37:06.940133    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:37:06.940144    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:37:06.953497    8811 logs.go:123] Gathering logs for kube-apiserver [86242cc8dea1] ...
	I0729 03:37:06.953512    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86242cc8dea1"
	I0729 03:37:06.973430    8811 logs.go:123] Gathering logs for coredns [4eb8bb55c33b] ...
	I0729 03:37:06.973443    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb8bb55c33b"
	I0729 03:37:06.985288    8811 logs.go:123] Gathering logs for kube-scheduler [fc9c6a5c3709] ...
	I0729 03:37:06.985303    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc9c6a5c3709"
	I0729 03:37:06.997773    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:37:06.997783    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:37:07.002094    8811 logs.go:123] Gathering logs for etcd [71b4ba4fb8fb] ...
	I0729 03:37:07.002103    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71b4ba4fb8fb"
	I0729 03:37:07.016388    8811 logs.go:123] Gathering logs for kube-proxy [02fbf8081e77] ...
	I0729 03:37:07.016398    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02fbf8081e77"
	I0729 03:37:07.028071    8811 logs.go:123] Gathering logs for kube-apiserver [bf07931eab79] ...
	I0729 03:37:07.028080    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf07931eab79"
	I0729 03:37:07.053830    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:37:07.053840    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:37:07.770081    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:07.770118    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:09.579472    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:14.581661    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:14.581770    8811 kubeadm.go:597] duration metric: took 4m4.367033541s to restartPrimaryControlPlane
	W0729 03:37:14.581858    8811 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 03:37:14.581913    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 03:37:15.620871    8811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.038963791s)
	I0729 03:37:15.620943    8811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:37:15.625879    8811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:37:15.628686    8811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:37:15.631518    8811 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 03:37:15.631524    8811 kubeadm.go:157] found existing configuration files:
	
	I0729 03:37:15.631549    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf
	I0729 03:37:15.634032    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 03:37:15.634053    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:37:15.636844    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf
	I0729 03:37:15.639766    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 03:37:15.639787    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:37:15.642377    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf
	I0729 03:37:15.644836    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 03:37:15.644853    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:37:15.647771    8811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf
	I0729 03:37:15.650126    8811 kubeadm.go:163] "https://control-plane.minikube.internal:51263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51263 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 03:37:15.650150    8811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
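
The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already names this cluster's control-plane endpoint, and is otherwise deleted so the subsequent kubeadm init can regenerate it. A minimal shell sketch of that pattern, with the endpoint and file names taken from this run:

    # endpoint and file list as seen in the log lines above
    endpoint="https://control-plane.minikube.internal:51263"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits non-zero when the endpoint is absent (or, as here, when
        # the file does not exist at all), so the file is removed either way
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
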
	I0729 03:37:15.652773    8811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 03:37:15.669441    8811 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 03:37:15.669549    8811 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 03:37:15.721234    8811 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 03:37:15.721291    8811 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 03:37:15.721345    8811 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
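
As the preflight message notes, the control-plane images can be fetched ahead of time instead of during init; on this guest that would be the following, with the version taken from the "[init] Using Kubernetes version" line above:

    # optional pre-pull, assuming the same binary path as the rest of this run
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm config images pull --kubernetes-version v1.24.1
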
	I0729 03:37:15.772008    8811 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 03:37:15.776253    8811 out.go:204]   - Generating certificates and keys ...
	I0729 03:37:15.776289    8811 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 03:37:15.776322    8811 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 03:37:15.776398    8811 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 03:37:15.776587    8811 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 03:37:15.776624    8811 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 03:37:15.776653    8811 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 03:37:15.776696    8811 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 03:37:15.776749    8811 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 03:37:15.776820    8811 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 03:37:15.776901    8811 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 03:37:15.776939    8811 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 03:37:15.776996    8811 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 03:37:15.885146    8811 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 03:37:16.110286    8811 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 03:37:16.267765    8811 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 03:37:16.331278    8811 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 03:37:16.359622    8811 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 03:37:16.360132    8811 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 03:37:16.360196    8811 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 03:37:16.461737    8811 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 03:37:12.770714    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:12.770810    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:16.464878    8811 out.go:204]   - Booting up control plane ...
	I0729 03:37:16.464949    8811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 03:37:16.464989    8811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 03:37:16.465022    8811 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 03:37:16.465074    8811 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 03:37:16.465146    8811 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 03:37:20.965865    8811 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504073 seconds
	I0729 03:37:20.965961    8811 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 03:37:20.972001    8811 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 03:37:21.487266    8811 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 03:37:21.487377    8811 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-376000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 03:37:21.994780    8811 kubeadm.go:310] [bootstrap-token] Using token: wqt6bu.fgmw34p07c6uokt1
	I0729 03:37:17.771813    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:17.771838    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:21.997500    8811 out.go:204]   - Configuring RBAC rules ...
	I0729 03:37:21.997606    8811 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 03:37:21.998302    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 03:37:22.005711    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 03:37:22.007354    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 03:37:22.008768    8811 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 03:37:22.009938    8811 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 03:37:22.014961    8811 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 03:37:22.194268    8811 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 03:37:22.401448    8811 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 03:37:22.402018    8811 kubeadm.go:310] 
	I0729 03:37:22.402054    8811 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 03:37:22.402058    8811 kubeadm.go:310] 
	I0729 03:37:22.402115    8811 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 03:37:22.402145    8811 kubeadm.go:310] 
	I0729 03:37:22.402216    8811 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 03:37:22.402254    8811 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 03:37:22.402282    8811 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 03:37:22.402287    8811 kubeadm.go:310] 
	I0729 03:37:22.402315    8811 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 03:37:22.402317    8811 kubeadm.go:310] 
	I0729 03:37:22.402345    8811 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 03:37:22.402349    8811 kubeadm.go:310] 
	I0729 03:37:22.402395    8811 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 03:37:22.402447    8811 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 03:37:22.402491    8811 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 03:37:22.402493    8811 kubeadm.go:310] 
	I0729 03:37:22.402535    8811 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 03:37:22.402594    8811 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 03:37:22.402602    8811 kubeadm.go:310] 
	I0729 03:37:22.402645    8811 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wqt6bu.fgmw34p07c6uokt1 \
	I0729 03:37:22.402699    8811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 \
	I0729 03:37:22.402715    8811 kubeadm.go:310] 	--control-plane 
	I0729 03:37:22.402718    8811 kubeadm.go:310] 
	I0729 03:37:22.402761    8811 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 03:37:22.402765    8811 kubeadm.go:310] 
	I0729 03:37:22.402815    8811 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wqt6bu.fgmw34p07c6uokt1 \
	I0729 03:37:22.402874    8811 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 
	I0729 03:37:22.402934    8811 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
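
The --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed from the cluster CA using the standard openssl pipeline from the kubeadm documentation. The CA path is an inference from the "[certs] Using certificateDir folder /var/lib/minikube/certs" line earlier in this run:

    # recompute the SHA-256 hash of the cluster CA's public key
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
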
	I0729 03:37:22.402942    8811 cni.go:84] Creating CNI manager for ""
	I0729 03:37:22.402950    8811 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:37:22.407031    8811 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 03:37:22.410015    8811 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 03:37:22.412897    8811 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
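
The 496-byte 1-k8s.conflist copied above is minikube's bridge CNI configuration. Its exact contents are not reproduced in this log; the sketch below is a representative bridge conflist only, and the plugin list, subnet, and field values are assumptions rather than values read from this run:

    # write a representative bridge conflist; contents are illustrative,
    # not the literal file minikube generated here
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
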
	I0729 03:37:22.417848    8811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 03:37:22.417894    8811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 03:37:22.417922    8811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-376000 minikube.k8s.io/updated_at=2024_07_29T03_37_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=running-upgrade-376000 minikube.k8s.io/primary=true
	I0729 03:37:22.461426    8811 kubeadm.go:1113] duration metric: took 43.573375ms to wait for elevateKubeSystemPrivileges
	I0729 03:37:22.461439    8811 ops.go:34] apiserver oom_adj: -16
	I0729 03:37:22.461447    8811 kubeadm.go:394] duration metric: took 4m12.260784125s to StartCluster
	I0729 03:37:22.461465    8811 settings.go:142] acquiring lock: {Name:mk5fe4de5daf4f1a01814785384dc93f95ac574d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:37:22.461636    8811 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:37:22.462015    8811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:37:22.462220    8811 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:37:22.462246    8811 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 03:37:22.462286    8811 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-376000"
	I0729 03:37:22.462298    8811 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-376000"
	I0729 03:37:22.462309    8811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-376000"
	I0729 03:37:22.462299    8811 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-376000"
	W0729 03:37:22.462342    8811 addons.go:243] addon storage-provisioner should already be in state true
	I0729 03:37:22.462316    8811 config.go:182] Loaded profile config "running-upgrade-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:37:22.462355    8811 host.go:66] Checking if "running-upgrade-376000" exists ...
	I0729 03:37:22.463160    8811 kapi.go:59] client config for running-upgrade-376000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/running-upgrade-376000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10615c080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 03:37:22.463279    8811 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-376000"
	W0729 03:37:22.463284    8811 addons.go:243] addon default-storageclass should already be in state true
	I0729 03:37:22.463290    8811 host.go:66] Checking if "running-upgrade-376000" exists ...
	I0729 03:37:22.466016    8811 out.go:177] * Verifying Kubernetes components...
	I0729 03:37:22.466412    8811 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 03:37:22.470178    8811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 03:37:22.470185    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:37:22.473052    8811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:37:22.772730    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:22.772752    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:22.475998    8811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:37:22.479066    8811 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:37:22.479072    8811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 03:37:22.479077    8811 sshutil.go:53] new ssh client: &{IP:localhost Port:51231 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/running-upgrade-376000/id_rsa Username:docker}
	I0729 03:37:22.562758    8811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:37:22.567674    8811 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:37:22.567720    8811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:37:22.571544    8811 api_server.go:72] duration metric: took 109.3155ms to wait for apiserver process to appear ...
	I0729 03:37:22.571553    8811 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:37:22.571559    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:22.609903    8811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 03:37:22.637308    8811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
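
The healthz polling that runs through the remainder of this log probes the endpoint shown in the api_server.go lines. The same probe can be issued by hand from inside the guest, with the IP and port taken from this run (-k because the apiserver serving certificate is not in the guest's trust store):

    # a healthy apiserver answers HTTP 200 with the body "ok"
    curl -k https://10.0.2.15:8443/healthz
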
	I0729 03:37:27.773955    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:27.774025    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:27.573638    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:27.573725    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:32.775928    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:32.775951    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:32.574249    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:32.574271    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:37.777892    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:37.777914    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:37.574593    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:37.574623    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:42.779255    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:42.779301    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:42.575091    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:42.575144    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:47.781466    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:47.781490    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:47.575830    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:47.575880    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:52.576075    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:52.576101    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 03:37:52.935786    8811 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 03:37:52.940127    8811 out.go:177] * Enabled addons: storage-provisioner
	I0729 03:37:52.783605    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:52.783769    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:37:52.798720    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:37:52.798804    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:37:52.811337    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:37:52.811412    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:37:52.826343    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:37:52.826418    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:37:52.837106    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:37:52.837179    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:37:52.849089    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:37:52.849162    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:37:52.859254    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:37:52.859332    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:37:52.869267    8948 logs.go:276] 0 containers: []
	W0729 03:37:52.869280    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:37:52.869349    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:37:52.879396    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:37:52.879415    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:37:52.879436    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:37:52.916563    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:37:52.916572    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:37:52.959370    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:37:52.959380    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:37:52.974239    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:37:52.974253    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:37:52.985510    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:37:52.985521    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:37:52.997433    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:37:52.997446    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:37:53.012678    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:37:53.012692    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:37:53.024266    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:37:53.024275    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:37:53.040575    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:37:53.040589    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:37:53.066291    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:37:53.066298    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:37:53.070756    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:37:53.070762    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:37:53.170408    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:37:53.170419    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:37:53.184288    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:37:53.184302    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:37:53.200964    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:37:53.200975    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:37:53.214875    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:37:53.214887    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:37:53.231147    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:37:53.231162    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:37:53.242593    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:37:53.242609    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
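
Each "Gathering logs for ..." cycle above follows the same shape: list container IDs per control-plane component with a docker name filter, then tail each match. A condensed sketch of that sweep, with the component list taken from the filters used in this run:

    # per-component log sweep, as performed by logs.go above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
        for id in $(docker ps -a --filter=name="k8s_${name}" --format '{{.ID}}'); do
            docker logs --tail 400 "$id"
        done
    done
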
	I0729 03:37:55.754161    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:52.947083    8811 addons.go:510] duration metric: took 30.485438042s for enable addons: enabled=[storage-provisioner]
	I0729 03:38:00.756398    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:00.756564    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:00.770415    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:00.770503    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:00.782265    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:00.782336    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:00.796900    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:00.796975    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:00.811910    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:00.811987    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:00.822546    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:00.822615    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:00.833181    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:00.833247    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:00.843253    8948 logs.go:276] 0 containers: []
	W0729 03:38:00.843266    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:00.843324    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:00.860083    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:00.860103    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:00.860108    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:00.871648    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:00.871659    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:00.883596    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:00.883606    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:00.896484    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:00.896497    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:00.907321    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:00.907335    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:00.930597    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:00.930604    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:00.942112    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:00.942129    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:00.956616    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:00.956629    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:00.970214    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:00.970224    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:01.009154    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:01.009164    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:01.024424    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:01.024433    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:01.035570    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:01.035580    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:01.053786    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:01.053796    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:01.093829    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:01.093839    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:01.098651    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:01.098657    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:01.114080    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:01.114090    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:01.125621    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:01.125630    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:37:57.577025    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:57.577069    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:03.664761    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:02.578327    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:02.578377    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:08.667087    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:08.667205    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:08.681875    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:08.681951    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:08.694131    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:08.694205    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:08.704765    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:08.704832    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:08.715356    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:08.715419    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:08.727886    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:08.727955    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:08.738646    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:08.738707    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:08.748884    8948 logs.go:276] 0 containers: []
	W0729 03:38:08.748896    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:08.748946    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:08.759620    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:08.759638    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:08.759643    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:08.795993    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:08.796001    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:08.799904    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:08.799911    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:08.813354    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:08.813366    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:08.825922    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:08.825931    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:08.863178    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:08.863190    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:08.878048    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:08.878062    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:08.893382    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:08.893392    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:08.904893    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:08.904907    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:08.928914    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:08.928920    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:08.943217    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:08.943229    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:08.980777    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:08.980791    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:08.991464    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:08.991476    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:09.009019    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:09.009032    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:09.020054    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:09.020067    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:09.032471    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:09.032484    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:09.044489    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:09.044503    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:11.558977    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:07.579902    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:07.579930    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:16.561238    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:16.561348    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:16.574273    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:16.574335    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:16.584934    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:16.585006    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:16.595077    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:16.595146    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:16.605362    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:16.605439    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:16.616761    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:16.616830    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:16.627747    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:16.627815    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:16.642133    8948 logs.go:276] 0 containers: []
	W0729 03:38:16.642148    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:16.642204    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:16.652568    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:16.652584    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:16.652589    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:16.676111    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:16.676119    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:16.712343    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:16.712350    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:16.747634    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:16.747644    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:16.760461    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:16.760474    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:16.775058    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:16.775068    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:16.786905    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:16.786916    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:16.825847    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:16.825861    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:16.838969    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:16.838979    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:16.853327    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:16.853342    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:16.867751    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:16.867767    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:16.880368    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:16.880382    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:16.897798    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:16.897808    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:16.908953    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:16.908964    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:16.913764    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:16.913771    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:16.927787    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:16.927806    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:16.939548    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:16.939562    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:12.582011    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:12.582064    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:19.453386    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:17.584138    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:17.584178    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:24.454474    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:24.454676    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:24.473680    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:24.473772    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:24.490504    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:24.490588    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:24.504283    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:24.504355    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:24.514483    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:24.514566    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:24.525305    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:24.525387    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:24.536061    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:24.536131    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:24.546634    8948 logs.go:276] 0 containers: []
	W0729 03:38:24.546645    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:24.546696    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:24.557716    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:24.557734    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:24.557740    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:24.596477    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:24.596496    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:24.608681    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:24.608696    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:24.635940    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:24.635949    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:24.647772    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:24.647783    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:24.685979    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:24.685988    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:24.719719    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:24.719730    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:24.733358    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:24.733371    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:24.744763    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:24.744774    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:24.749375    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:24.749383    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:24.763140    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:24.763149    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:24.774815    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:24.774826    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:24.792541    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:24.792555    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:24.803660    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:24.803669    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:24.817528    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:24.817539    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:24.833261    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:24.833270    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:24.845162    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:24.845172    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:22.586328    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:22.586500    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:22.597232    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:38:22.597301    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:22.607766    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:38:22.607839    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:22.618866    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:38:22.618935    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:22.629250    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:38:22.629315    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:22.639076    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:38:22.639138    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:22.650226    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:38:22.650288    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:22.660284    8811 logs.go:276] 0 containers: []
	W0729 03:38:22.660299    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:22.660354    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:22.671824    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:38:22.671837    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:38:22.671843    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:22.683962    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:22.683973    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:38:22.715402    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:22.715501    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:22.716874    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:22.716883    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:22.754054    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:38:22.754067    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:38:22.768275    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:38:22.768289    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:38:22.779804    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:38:22.779818    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:38:22.791829    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:38:22.791843    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:38:22.809188    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:22.809202    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:22.834236    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:22.834250    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:22.838552    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:38:22.838561    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:38:22.852466    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:38:22.852475    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:38:22.864053    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:38:22.864063    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:38:22.878436    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:38:22.878446    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:38:22.890931    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:22.890940    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:38:22.890970    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:38:22.890975    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:22.890979    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:22.890984    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:22.891003    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:38:27.361263    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:32.362569    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
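[editor's note] The pair of api_server.go lines above is one health probe: a GET against https://10.0.2.15:8443/healthz that is abandoned after the ~5s client timeout (03:38:27 -> 03:38:32) and logged as "stopped". A minimal Go sketch of that probe, assuming only what the log shows (the URL, the 5s budget, and a self-signed apiserver cert that forces skipping TLS verification); the function name checkAPIServerHealthz is hypothetical:

// healthzprobe.go - sketch of the healthz probe recorded above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// Assumption: the guest apiserver serves a self-signed cert,
			// so a bare probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this wraps "context deadline exceeded (Client.Timeout
		// exceeded while awaiting headers)", as in the log lines above.
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkAPIServerHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}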
	I0729 03:38:32.362784    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:32.382778    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:32.382865    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:32.397090    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:32.397172    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:32.410043    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:32.410113    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:32.423496    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:32.423570    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:32.435915    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:32.435986    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:32.446494    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:32.446561    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:32.461652    8948 logs.go:276] 0 containers: []
	W0729 03:38:32.461670    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:32.461730    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:32.471928    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
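[editor's note] The discovery pass that just completed runs one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component and counts the IDs (the logs.go:276 "N containers" lines). A local Go sketch of that pass; containerIDs is a hypothetical name, and exec.Command stands in for the ssh_runner that executes the same command inside the guest:

// containerids.go - sketch of the container-ID discovery pass shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one short ID per output line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
	}
}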
	I0729 03:38:32.471949    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:32.471954    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:32.489047    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:32.489057    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:32.504989    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:32.505002    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:32.529180    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:32.529190    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:32.568132    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:32.568156    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:32.572611    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:32.572619    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:32.586562    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:32.586575    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:32.597858    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:32.597871    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:32.633664    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:32.633679    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:32.649661    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:32.649673    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:32.661743    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:32.661757    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:32.676526    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:32.676536    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:32.714330    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:32.714340    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:32.728454    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:32.728464    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:32.739278    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:32.739292    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:32.751084    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:32.751094    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:32.762680    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:32.762690    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
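[editor's note] Every "Gathering logs for ..." source in the cycle above is a shell one-liner run through /bin/bash -c, whether the source is a container (docker logs --tail 400 <id>) or a systemd unit (journalctl -u <unit> -n 400). The commands below are copied verbatim from the log; running them through local bash instead of ssh_runner, and the helper name gather, are assumptions of this sketch:

// gatherlogs.go - sketch of one log-gathering cycle.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	// Container ID taken from the cycle above; it only exists inside that guest.
	gather("kube-apiserver [d5cd4a30fc18]", `docker logs --tail 400 d5cd4a30fc18`)
}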
	I0729 03:38:35.282307    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:32.894932    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:40.282799    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:40.283038    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:40.309243    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:40.309364    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:40.326341    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:40.326424    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:40.339848    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:40.339919    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:40.352511    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:40.352585    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:40.362731    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:40.362799    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:40.372730    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:40.372804    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:40.387958    8948 logs.go:276] 0 containers: []
	W0729 03:38:40.387971    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:40.388027    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:40.403348    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:40.403368    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:40.403374    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:40.414346    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:40.414357    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:40.429765    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:40.429779    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:40.447189    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:40.447202    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:40.451439    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:40.451446    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:40.498948    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:40.498961    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:40.514570    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:40.514583    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:40.553173    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:40.553184    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:40.567742    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:40.567754    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:40.579097    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:40.579109    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:40.590977    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:40.590989    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:40.605235    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:40.605249    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:40.617339    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:40.617351    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:40.640831    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:40.640838    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:40.653564    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:40.653577    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:40.665070    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:40.665080    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:40.704001    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:40.704034    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:37.897086    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:37.897229    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:37.910328    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:38:37.910404    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:37.922394    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:38:37.922460    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:37.932617    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:38:37.932696    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:37.943363    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:38:37.943425    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:37.953551    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:38:37.953613    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:37.964265    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:38:37.964325    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:37.974045    8811 logs.go:276] 0 containers: []
	W0729 03:38:37.974057    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:37.974117    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:37.985080    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:38:37.985093    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:37.985098    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:38:38.016085    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:38.016182    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
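[editor's note] The two "Found kubelet problem" warnings above come from logs.go:138 scanning the journalctl output for known failure signatures; here both hits are the same RBAC error (the node is forbidden from listing the kube-root-ca.crt ConfigMap because the node authorizer found no relationship between the node and the object). A sketch of that scan, assuming a single illustrative pattern ("is forbidden") where the real list in logs.go is longer; findProblems is a hypothetical name:

// kubeletproblems.go - sketch of the kubelet problem scan.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func findProblems(journal string, patterns []string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems
}

func main() {
	// One journal line abridged from the log above.
	journal := `Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605 12180 reflector.go:324] configmaps "kube-root-ca.crt" is forbidden`
	for _, p := range findProblems(journal, []string{"is forbidden"}) {
		fmt.Println("Found kubelet problem:", p)
	}
}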
	I0729 03:38:38.017466    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:38.017470    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:38.051172    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:38:38.051183    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:38:38.069018    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:38:38.069027    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:38:38.080480    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:38:38.080490    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:38:38.097820    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:38:38.097831    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:38:38.109372    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:38:38.109382    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:38.120963    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:38.120972    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:38.125471    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:38:38.125480    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:38:38.140602    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:38:38.140614    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:38:38.152032    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:38:38.152042    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:38:38.164563    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:38:38.164572    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:38:38.179603    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:38.179616    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:38.202583    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:38.202591    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:38:38.202614    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:38:38.202618    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:38.202633    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:38.202638    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:38.202642    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
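[editor's note] The "container status" step in each cycle runs the fallback one-liner sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl when it resolves on PATH, otherwise list containers with the docker CLI. A Go rendering of the same fallback, under the assumption that sudo is passwordless in the guest; containerStatus is a hypothetical name:

// containerstatus.go - sketch of the crictl-or-docker fallback.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// crictl missing or failed: fall back to the docker CLI, as the `||` does.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}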
	I0729 03:38:43.218587    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:48.221003    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:48.221296    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:48.251954    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:48.252084    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:48.271648    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:48.271763    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:48.286236    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:48.286308    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:48.298094    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:48.298170    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:48.308610    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:48.308697    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:48.323662    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:48.323724    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:48.334558    8948 logs.go:276] 0 containers: []
	W0729 03:38:48.334567    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:48.334623    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:48.345549    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:48.345567    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:48.345573    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:48.380130    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:48.380145    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:48.397974    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:48.397988    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:48.422060    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:48.422070    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:48.426332    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:48.426339    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:48.464751    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:48.464764    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:48.479949    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:48.479961    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:48.491343    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:48.491353    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:48.505263    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:48.505278    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:48.518416    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:48.518432    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:48.535375    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:48.535389    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:48.550464    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:48.550474    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:48.567938    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:48.567953    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:48.583606    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:48.583621    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:48.595660    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:48.595671    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:48.607015    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:48.607025    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:48.644911    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:48.644919    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:51.159682    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:48.205265    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:56.162140    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:56.162499    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:56.194304    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:56.194437    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:56.212612    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:56.212713    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:56.226138    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:56.226217    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:56.239077    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:56.239152    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:56.249731    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:56.249806    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:56.260876    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:56.260941    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:56.271491    8948 logs.go:276] 0 containers: []
	W0729 03:38:56.271504    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:56.271565    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:56.282467    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:56.282487    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:56.282493    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:56.286773    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:56.286782    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:56.305616    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:56.305630    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:56.344922    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:56.344950    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:56.359506    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:56.359516    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:56.370782    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:56.370793    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:56.393527    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:56.393533    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:56.410413    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:56.410426    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:56.430948    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:56.430958    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:56.445206    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:56.445218    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:56.457796    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:56.457810    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:56.469544    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:56.469554    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:56.481607    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:56.481619    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:56.493837    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:56.493848    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:56.531303    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:56.531314    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:56.570890    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:56.570903    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:56.585359    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:56.585370    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:53.207565    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:53.207765    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:53.229191    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:38:53.229287    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:53.244264    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:38:53.244346    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:53.256987    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:38:53.257063    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:53.267453    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:38:53.267524    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:53.278601    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:38:53.278668    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:53.289163    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:38:53.289231    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:53.299592    8811 logs.go:276] 0 containers: []
	W0729 03:38:53.299604    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:53.299658    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:53.310237    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:38:53.310252    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:38:53.310257    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:38:53.321932    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:38:53.321942    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:38:53.337624    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:38:53.337636    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:38:53.348883    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:53.348896    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:53.373275    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:53.373282    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:53.377570    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:53.377575    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:53.414662    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:38:53.414673    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:38:53.428980    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:38:53.428994    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:38:53.443800    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:38:53.443811    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:38:53.460617    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:38:53.460631    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:53.472344    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:53.472355    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:38:53.504551    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:53.504651    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:53.505988    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:38:53.505994    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:38:53.520496    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:38:53.520509    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:38:53.535678    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:53.535688    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:38:53.535713    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:38:53.535719    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:38:53.535724    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:38:53.535728    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:53.535731    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:38:59.098856    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:04.101075    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:04.101284    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:04.119980    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:04.120068    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:04.133873    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:04.133952    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:04.145534    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:04.145596    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:04.157221    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:04.157297    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:04.167235    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:04.167306    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:04.177464    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:04.177535    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:04.187834    8948 logs.go:276] 0 containers: []
	W0729 03:39:04.187847    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:04.187902    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:04.198427    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:04.198447    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:04.198452    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:04.209729    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:04.209739    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:04.222241    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:04.222252    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:04.239493    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:04.239503    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:04.250752    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:04.250762    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:04.287612    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:04.287630    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:04.292393    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:04.292403    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:04.329259    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:04.329275    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:04.343435    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:04.343449    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:04.358210    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:04.358225    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:04.371358    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:04.371367    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:04.395577    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:04.395584    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:04.430796    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:04.430807    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:04.443139    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:04.443153    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:04.462279    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:04.462294    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:04.476422    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:04.476435    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:04.487819    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:04.487832    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:07.004245    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:03.538388    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:12.006555    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:12.006703    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:12.018711    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:12.018792    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:12.034047    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:12.034116    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:12.044709    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:12.044796    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:08.540696    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:08.540939    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:08.564201    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:08.564292    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:08.579492    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:08.579563    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:08.592371    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:39:08.592446    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:08.607244    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:08.607314    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:08.623052    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:08.623123    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:08.633808    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:08.633877    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:08.644023    8811 logs.go:276] 0 containers: []
	W0729 03:39:08.644034    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:08.644091    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:08.655192    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:08.655208    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:08.655214    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:08.689792    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:08.689890    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:08.691176    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:08.691193    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:08.705690    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:08.705701    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:08.719688    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:08.719698    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:08.737378    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:08.737387    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:08.748316    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:08.748326    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:08.760824    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:08.760837    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:08.772300    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:08.772310    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:08.797507    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:08.797515    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:08.801687    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:08.801696    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:08.841515    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:08.841527    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:08.857855    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:08.857866    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:08.869665    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:08.869674    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:08.892674    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:08.892684    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:08.892710    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:08.892715    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:08.892719    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:08.892723    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:08.892726    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:39:12.055785    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:12.055856    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:12.066243    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:12.066311    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:12.077247    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:12.077317    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:12.087650    8948 logs.go:276] 0 containers: []
	W0729 03:39:12.087660    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:12.087721    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:12.098384    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:12.098405    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:12.098410    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:12.121504    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:12.121515    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:12.126028    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:12.126036    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:12.162908    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:12.162918    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:12.173882    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:12.173893    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:12.185693    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:12.185704    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:12.204090    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:12.204100    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:12.217050    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:12.217060    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:12.253336    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:12.253344    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:12.274965    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:12.274979    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:12.288661    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:12.288674    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:12.303394    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:12.303404    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:12.315852    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:12.315866    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:12.351972    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:12.351985    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:12.367216    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:12.367227    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:12.379050    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:12.379062    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:12.390255    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:12.390266    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
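[editor's note] Stepping back, these pages record one outer wait loop per process: probe /healthz, and while it keeps timing out, rediscover the containers, dump a full diagnostics cycle, pause (~2.5s between the end of a cycle and the next check, e.g. 03:39:12.390 -> 03:39:14.904), and try again until an overall deadline. A structural sketch only; probe and dumpDiagnostics stand in for the earlier sketches, and the 30s deadline is an assumption, not a value taken from the log:

// waitloop.go - sketch of the outer apiserver wait loop.
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for the healthz probe and log-gathering sketches above.
func probe() error     { return fmt.Errorf("context deadline exceeded") }
func dumpDiagnostics() { fmt.Println("Gathering logs for kubelet / etcd / ...") }

func waitForAPIServer(deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if err := probe(); err == nil {
			return nil // apiserver answered /healthz
		}
		dumpDiagnostics()                   // one "Gathering logs for ..." cycle
		time.Sleep(2500 * time.Millisecond) // observed gap before the next check
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}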
	I0729 03:39:14.904576    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:19.905573    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:19.905785    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:19.923356    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:19.923444    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:19.940971    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:19.941049    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:19.952511    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:19.952581    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:19.962753    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:19.962826    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:19.973209    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:19.973278    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:19.985118    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:19.985208    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:19.995697    8948 logs.go:276] 0 containers: []
	W0729 03:39:19.995710    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:19.995765    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:20.006412    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:20.006427    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:20.006434    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:20.044096    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:20.044111    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:20.061773    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:20.061783    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:20.076110    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:20.076123    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:20.091313    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:20.091326    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:20.103324    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:20.103338    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:20.141370    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:20.141378    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:20.145813    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:20.145819    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:20.157347    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:20.157361    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:20.169443    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:20.169457    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:20.181022    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:20.181031    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:20.194434    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:20.194450    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:20.207360    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:20.207371    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:20.230465    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:20.230475    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:20.248176    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:20.248191    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:20.259418    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:20.259430    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:20.298348    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:20.298362    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
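
Each gathering cycle above follows the same two-step shape: for every control-plane component, list its container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (logs.go:276 reports the count, or logs.go:278 warns when none match, as with "kindnet"), then tail each container with `docker logs --tail 400 <id>`. A sketch of that loop under stated assumptions — `gatherComponentLogs` is a hypothetical name, and the commands are shown running locally rather than over the SSH channel ssh_runner.go actually uses:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func gatherComponentLogs(name string) error {
	// Step 1: enumerate container IDs for this component.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", name)
		return nil
	}
	// Step 2: tail the last 400 log lines of each container.
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("--- %s [%s] ---\n%s", name, id, logs)
	}
	return nil
}

func main() {
	for _, c := range components {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("gathering", c, "failed:", err)
		}
	}
}
```

Note that two containers per component (e.g. the kube-apiserver pair d5cd4a30fc18/6c9e82fc6ad9) indicates a restarted instance: `docker ps -a` includes exited containers, so both the current and the previous attempt get tailed.
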
	I0729 03:39:18.896679    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:22.817252    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:23.899011    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:23.899318    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:23.949222    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:23.949345    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:23.965777    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:23.965862    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:23.978864    8811 logs.go:276] 2 containers: [feaa048ca969 5d89100d144a]
	I0729 03:39:23.978941    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:23.996366    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:23.996430    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:24.006927    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:24.006999    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:24.017465    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:24.017537    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:24.027100    8811 logs.go:276] 0 containers: []
	W0729 03:39:24.027113    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:24.027171    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:24.038319    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:24.038334    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:24.038341    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:24.050280    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:24.050292    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:24.064843    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:24.064854    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:24.076357    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:24.076369    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:24.094811    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:24.094821    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:24.119732    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:24.119741    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:24.154317    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:24.154331    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:24.168746    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:24.168757    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:24.182436    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:24.182446    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:24.194055    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:24.194067    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:24.205492    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:24.205503    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:24.237144    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:24.237243    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:24.238632    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:24.238640    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:24.243254    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:24.243261    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:24.254666    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:24.254679    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:24.254703    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:24.254708    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:24.254712    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:24.254717    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:24.254719    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
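
The "Found kubelet problem" warnings (logs.go:138) and the summarized "X Problems detected in kubelet:" block above come from scanning the `journalctl -u kubelet -n 400` output for lines matching known-bad patterns; here the flagged fault is the RBAC error where system:node:running-upgrade-376000 cannot list the kube-root-ca.crt ConfigMap. A rough sketch of that scan — hypothetical and far simpler than the real matcher in logs.go, with `problemPatterns` chosen only to catch the failure visible in this run:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// Substrings that mark real faults rather than routine chatter; the RBAC
// failure in this run ("is forbidden: User \"system:node:...\"") matches both.
var problemPatterns = []string{
	"is forbidden:",
	"failed to list",
}

func main() {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range problemPatterns {
			if strings.Contains(line, p) {
				problems = append(problems, line)
				break
			}
		}
	}
	if len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println("  " + p)
		}
	}
}
```
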
	I0729 03:39:27.819419    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:27.819636    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:27.836122    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:27.836220    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:27.848783    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:27.848847    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:27.860228    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:27.860290    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:27.870531    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:27.870601    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:27.880932    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:27.880999    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:27.891511    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:27.891577    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:27.901403    8948 logs.go:276] 0 containers: []
	W0729 03:39:27.901416    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:27.901471    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:27.912155    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:27.912173    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:27.912178    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:27.929956    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:27.929966    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:27.941193    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:27.941206    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:27.978377    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:27.978386    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:28.012489    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:28.012501    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:28.026749    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:28.026761    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:28.030968    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:28.030975    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:28.044336    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:28.044346    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:28.056458    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:28.056468    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:28.068270    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:28.068280    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:28.080415    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:28.080425    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:28.091879    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:28.091889    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:28.129390    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:28.129402    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:28.142971    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:28.142981    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:28.157620    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:28.157630    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:28.172699    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:28.172708    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:28.195406    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:28.195413    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:30.708341    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:35.710639    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:35.710825    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:35.736977    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:35.737078    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:35.751796    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:35.751864    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:35.765670    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:35.765742    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:35.783618    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:35.783696    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:35.794121    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:35.794182    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:35.806536    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:35.806610    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:35.816582    8948 logs.go:276] 0 containers: []
	W0729 03:39:35.816593    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:35.816649    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:35.826905    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:35.826926    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:35.826931    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:35.851517    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:35.851527    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:35.889638    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:35.889646    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:35.931552    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:35.931563    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:35.977625    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:35.977636    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:35.992939    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:35.992955    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:36.005262    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:36.005272    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:36.016634    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:36.016648    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:36.031403    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:36.031413    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:36.042473    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:36.042487    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:36.057966    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:36.057980    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:36.069372    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:36.069383    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:36.082793    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:36.082803    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:36.095788    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:36.095798    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:36.099886    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:36.099893    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:36.112608    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:36.112619    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:36.129493    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:36.129504    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:34.258675    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:38.643256    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:39.260907    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:39.261052    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:39.272704    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:39.272776    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:39.283637    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:39.283721    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:39.294761    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:39:39.294837    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:39.309486    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:39.309554    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:39.320369    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:39.320440    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:39.331862    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:39.331932    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:39.342446    8811 logs.go:276] 0 containers: []
	W0729 03:39:39.342457    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:39.342513    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:39.352809    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:39.352824    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:39.352830    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:39.357310    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:39.357318    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:39.425916    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:39:39.425929    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:39:39.447968    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:39.447980    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:39.468099    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:39.468110    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:39.479040    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:39.479050    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:39.506951    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:39.506961    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:39.521632    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:39.521644    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:39.536664    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:39.536678    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:39.555650    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:39.555664    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:39.567196    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:39.567211    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:39.581061    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:39.581075    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:39.593310    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:39.593319    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:39.626767    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:39.626866    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:39.628200    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:39:39.628208    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:39:39.640222    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:39.640233    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:39.651363    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:39.651373    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:39.651399    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:39.651405    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:39.651462    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:39.651501    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:39.651505    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:39:43.645480    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:43.645925    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:43.682295    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:43.682438    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:43.702195    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:43.702318    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:43.719911    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:43.719989    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:43.732007    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:43.732075    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:43.742388    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:43.742466    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:43.753578    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:43.753647    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:43.771313    8948 logs.go:276] 0 containers: []
	W0729 03:39:43.771327    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:43.771390    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:43.782442    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:43.782463    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:43.782468    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:43.816981    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:43.816994    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:43.832217    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:43.832228    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:43.847669    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:43.847686    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:43.859813    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:43.859827    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:43.878041    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:43.878052    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:43.891625    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:43.891636    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:43.914967    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:43.914975    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:43.926524    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:43.926535    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:43.930593    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:43.930598    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:43.942154    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:43.942164    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:43.954199    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:43.954211    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:43.969897    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:43.969907    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:43.981930    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:43.981941    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:44.020108    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:44.020116    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:44.059055    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:44.059065    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:44.073409    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:44.073423    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:46.587417    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:51.589737    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:51.589961    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:51.609531    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:51.609625    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:51.624252    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:51.624330    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:51.643933    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:51.644012    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:51.655021    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:51.655099    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:51.665478    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:51.665549    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:51.676018    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:51.676094    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:51.686474    8948 logs.go:276] 0 containers: []
	W0729 03:39:51.686485    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:51.686544    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:51.696974    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:51.696991    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:51.696997    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:51.712939    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:51.712959    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:51.728614    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:51.728628    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:51.746906    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:51.746916    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:51.760901    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:51.760911    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:51.799986    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:51.799994    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:51.840445    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:51.840460    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:51.854809    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:51.854820    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:51.869156    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:51.869165    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:51.883062    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:51.883073    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:51.900360    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:51.900372    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:51.925965    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:51.925984    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:51.930544    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:51.930554    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:51.968725    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:51.968735    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:51.983179    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:51.983188    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:51.998415    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:51.998426    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:52.014730    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:52.014744    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:49.653769    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:54.528521    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:54.656365    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:54.656550    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:54.680775    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:39:54.680869    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:54.694329    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:39:54.694395    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:54.705837    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:39:54.705909    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:54.716536    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:39:54.716596    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:54.728534    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:39:54.728603    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:54.740259    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:39:54.740321    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:54.754771    8811 logs.go:276] 0 containers: []
	W0729 03:39:54.754783    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:54.754839    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:54.765270    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:39:54.765286    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:39:54.765292    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:39:54.780068    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:39:54.780080    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:39:54.791774    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:39:54.791786    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:39:54.803678    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:39:54.803688    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:39:54.817825    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:39:54.817839    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:39:54.830081    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:54.830092    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:54.834943    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:54.834949    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:54.859792    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:54.859801    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:54.895802    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:39:54.895813    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:39:54.907675    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:39:54.907686    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:39:54.927513    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:54.927528    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:39:54.959117    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:54.959215    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:54.960512    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:39:54.960518    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:39:54.975401    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:39:54.975411    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:39:54.987213    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:39:54.987224    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:39:54.999101    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:39:54.999111    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:55.010280    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:55.010290    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:39:55.010318    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:39:55.010322    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:39:55.010326    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:39:55.010329    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:39:55.010332    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:39:59.530860    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:59.531102    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:59.548870    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:59.548955    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:59.562670    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:59.562748    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:59.573528    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:59.573595    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:59.584307    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:59.584376    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:59.594444    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:59.594510    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:59.605062    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:59.605128    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:59.615959    8948 logs.go:276] 0 containers: []
	W0729 03:39:59.615970    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:59.616021    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:59.626272    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:59.626291    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:59.626296    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:59.640246    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:59.640257    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:59.655506    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:59.655518    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:59.670025    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:59.670036    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:59.693801    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:59.693808    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:59.731914    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:59.731927    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:59.747295    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:59.747305    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:59.764661    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:59.764670    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:59.776567    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:59.776578    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:59.781099    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:59.781109    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:59.816481    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:59.816492    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:59.828646    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:59.828657    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:59.840947    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:59.840960    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:59.853946    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:59.853956    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:59.867247    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:59.867260    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:59.906099    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:59.906108    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:59.917952    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:59.917963    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:02.435583    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:05.014218    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:07.436933    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:07.437164    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:07.456208    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:07.456294    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:07.470418    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:07.470496    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:07.483340    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:07.483410    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:07.495661    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:07.495724    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:07.512556    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:07.512627    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:07.523445    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:07.523511    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:07.536070    8948 logs.go:276] 0 containers: []
	W0729 03:40:07.536085    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:07.536142    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:07.546314    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:07.546330    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:07.546336    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:07.560889    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:07.560900    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:07.572688    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:07.572699    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:07.584813    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:07.584825    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:07.619572    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:07.619587    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:07.634167    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:07.634178    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:07.646959    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:07.646972    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:07.664551    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:07.664562    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:07.677549    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:07.677558    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:07.691137    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:07.691147    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:07.702591    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:07.702603    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:07.719565    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:07.719575    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:07.730544    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:07.730556    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:07.741807    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:07.741819    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:07.765815    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:07.765824    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:07.804555    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:07.804562    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:07.808980    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:07.808988    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
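The block above is one pass of minikube's log-gathering loop: for each control-plane component it lists matching containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", then tails the last 400 lines of every hit. Below is a minimal standalone Go sketch of those two docker CLI calls (helper names are illustrative, not minikube's actual logs.go internals; it assumes the docker CLI is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors "docker ps -a --filter=name=k8s_<component>":
// it returns the IDs of all containers, running or exited, whose name
// carries the given Kubernetes component prefix.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the per-container "docker logs --tail 400 <id>" step.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println("list failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			if logs, err := tailLogs(id); err == nil {
				_ = logs // a real report would attach these logs per component
			}
		}
	}
}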
	I0729 03:40:10.349092    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:10.016392    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:10.016566    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:10.031234    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:10.031309    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:10.043274    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:10.043337    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:10.054519    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:10.054597    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:10.065033    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:10.065104    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:10.075287    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:10.075344    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:10.086048    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:10.086120    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:10.096348    8811 logs.go:276] 0 containers: []
	W0729 03:40:10.096357    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:10.096407    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:10.106754    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:10.106772    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:10.106778    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:10.118724    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:10.118737    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:10.135956    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:10.135966    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:10.141169    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:10.141179    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:10.154952    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:10.154964    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:10.166858    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:10.166868    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:10.178892    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:10.178903    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:10.212107    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:10.212206    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:10.213543    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:10.213550    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:10.227138    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:10.227148    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:10.259716    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:10.259727    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:10.272733    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:10.272742    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:10.288099    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:10.288113    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:10.299819    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:10.299829    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:10.324844    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:10.324855    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:10.336234    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:10.336246    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:10.372812    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:10.372822    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:10.372848    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:10.372853    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:10.372856    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:10.372860    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:10.372862    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:15.351194    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
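Each gathering cycle is bracketed by the healthz probe from api_server.go: a GET against https://10.0.2.15:8443/healthz that, per the timestamps above, gives up after roughly five seconds and is logged as "stopped". A rough standalone approximation of that probe follows (the 5-second client timeout and the skipped TLS verification are inferred from the log, not taken from minikube's source):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz approximates the probe behind the "Checking apiserver
// healthz" / "stopped" pairs above: GET /healthz with a short client
// timeout, where a timeout surfaces as Client.Timeout exceeded.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred: ~5s elapse between check and "stopped"
		Transport: &http.Transport{
			// assumption: the bootstrap control plane serves a locally signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded while awaiting headers
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}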
	I0729 03:40:15.351360    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:15.364183    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:15.364258    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:15.375038    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:15.375109    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:15.385514    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:15.385582    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:15.396744    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:15.396816    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:15.407104    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:15.407169    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:15.417620    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:15.417681    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:15.428113    8948 logs.go:276] 0 containers: []
	W0729 03:40:15.428127    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:15.428181    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:15.438650    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:15.438669    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:15.438674    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:15.453352    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:15.453361    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:15.475563    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:15.475569    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:15.512452    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:15.512462    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:15.550262    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:15.550274    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:15.588284    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:15.588296    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:15.593138    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:15.593146    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:15.607446    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:15.607461    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:15.622584    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:15.622593    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:15.635299    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:15.635310    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:15.646261    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:15.646273    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:15.657943    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:15.657952    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:15.671613    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:15.671623    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:15.683330    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:15.683339    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:15.696088    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:15.696099    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:15.709811    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:15.709821    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:15.728884    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:15.728895    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:18.242592    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:20.376834    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:23.245222    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:23.245531    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:23.276938    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:23.277060    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:23.295278    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:23.295358    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:23.309249    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:23.309324    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:23.322019    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:23.322139    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:23.334083    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:23.334156    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:23.346217    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:23.346291    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:23.356412    8948 logs.go:276] 0 containers: []
	W0729 03:40:23.356420    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:23.356473    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:23.367211    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:23.367230    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:23.367235    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:23.382015    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:23.382024    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:23.393991    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:23.394000    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:23.411347    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:23.411356    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:23.449709    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:23.449718    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:23.454092    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:23.454097    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:23.465693    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:23.465703    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:23.481542    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:23.481551    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:23.495091    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:23.495104    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:23.507526    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:23.507537    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:23.519617    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:23.519626    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:23.557299    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:23.557309    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:23.571947    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:23.571960    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:23.594596    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:23.594606    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:23.629696    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:23.629712    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:23.646728    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:23.646743    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:23.658040    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:23.658055    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:26.169708    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:25.378990    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:25.379167    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:25.391300    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:25.391370    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:25.402019    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:25.402080    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:25.412907    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:25.412980    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:25.423321    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:25.423386    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:25.433929    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:25.433992    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:25.445332    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:25.445400    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:25.455416    8811 logs.go:276] 0 containers: []
	W0729 03:40:25.455426    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:25.455481    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:25.465471    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:25.465490    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:25.465495    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:25.476928    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:25.476939    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:25.489546    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:25.489557    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:25.501084    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:25.501096    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:25.515544    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:25.515555    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:25.529214    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:25.529224    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:25.541713    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:25.541727    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:25.556437    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:25.556446    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:25.592667    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:25.592679    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:25.604852    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:25.604863    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:25.619101    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:25.619111    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:25.624202    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:25.624207    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:25.636153    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:25.636166    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:25.654238    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:25.654248    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:25.678866    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:25.678873    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:25.711150    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:25.711247    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:25.712540    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:25.712546    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:25.712570    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:25.712574    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:25.712590    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:25.712593    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:25.712596    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:31.172030    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:31.172245    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:31.197669    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:31.197784    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:31.214406    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:31.214487    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:31.227360    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:31.227421    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:31.238969    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:31.239043    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:31.253505    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:31.253568    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:31.264083    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:31.264151    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:31.274446    8948 logs.go:276] 0 containers: []
	W0729 03:40:31.274458    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:31.274517    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:31.288291    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:31.288308    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:31.288313    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:31.302840    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:31.302851    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:31.316164    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:31.316174    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:31.328193    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:31.328206    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:31.343890    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:31.343900    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:31.355533    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:31.355545    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:31.374843    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:31.374851    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:31.398244    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:31.398252    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:31.436313    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:31.436319    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:31.472308    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:31.472322    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:31.477370    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:31.477378    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:31.491102    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:31.491113    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:31.504668    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:31.504678    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:31.518811    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:31.518824    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:31.530723    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:31.530739    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:31.545257    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:31.545268    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:31.581987    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:31.581997    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:34.095973    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:35.716556    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:39.098558    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:39.098992    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:39.134533    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:39.134659    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:39.152410    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:39.152500    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:39.166084    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:39.166160    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:39.178013    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:39.178090    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:39.193723    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:39.193798    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:39.222026    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:39.222107    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:39.232904    8948 logs.go:276] 0 containers: []
	W0729 03:40:39.232917    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:39.232980    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:39.253131    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:39.253150    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:39.253155    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:39.267495    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:39.267506    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:39.271917    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:39.271923    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:39.306682    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:39.306694    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:39.321368    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:39.321378    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:39.333528    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:39.333540    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:39.347855    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:39.347865    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:39.367012    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:39.367024    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:39.380391    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:39.380401    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:39.392482    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:39.392493    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:39.404229    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:39.404241    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:39.417774    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:39.417785    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:39.456703    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:39.456715    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:39.493705    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:39.493717    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:39.507167    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:39.507180    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:39.519003    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:39.519014    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:39.534378    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:39.534389    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:40.718689    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:40.718836    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:40.735277    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:40.735354    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:40.746809    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:40.746877    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:40.757097    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:40.757167    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:40.767545    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:40.767616    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:40.777900    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:40.777963    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:40.788555    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:40.788618    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:40.799087    8811 logs.go:276] 0 containers: []
	W0729 03:40:40.799098    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:40.799157    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:40.809091    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:40.809107    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:40.809112    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:40.821556    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:40.821566    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:40.833531    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:40.833542    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:40.845062    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:40.845074    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:40.856983    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:40.856993    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:40.868610    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:40.868620    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:40.882906    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:40.882918    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:40.914264    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:40.914362    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:40.915734    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:40.915740    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:40.951928    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:40.951939    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:40.966028    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:40.966037    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:40.979958    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:40.979969    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:40.998065    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:40.998075    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:41.002303    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:41.002312    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:41.013539    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:41.013548    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:41.037827    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:41.037834    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:41.049513    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:41.049525    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:41.049551    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:41.049557    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:41.049560    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:41.049565    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:41.049567    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:40:42.056787    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:47.056955    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:47.057144    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:47.079482    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:47.079598    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:47.099186    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:47.099263    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:47.111007    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:47.111073    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:47.134595    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:47.134659    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:47.148542    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:47.148603    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:47.160557    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:47.160627    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:47.171166    8948 logs.go:276] 0 containers: []
	W0729 03:40:47.171179    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:47.171235    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:47.187444    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:47.187464    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:47.187470    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:47.200506    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:47.200517    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:47.205332    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:47.205340    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:47.221310    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:47.221321    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:47.233349    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:47.233361    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:47.249017    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:47.249031    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:47.283460    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:47.283474    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:47.323074    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:47.323092    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:47.334430    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:47.334441    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:47.346626    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:47.346635    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:47.360066    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:47.360077    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:47.372489    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:47.372503    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:47.389423    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:47.389434    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:47.401949    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:47.401961    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:47.425345    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:47.425355    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:47.464394    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:47.464410    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:47.484388    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:47.484400    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:50.001690    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:51.053503    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:55.004127    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:55.004259    8948 kubeadm.go:597] duration metric: took 4m4.215185375s to restartPrimaryControlPlane
	W0729 03:40:55.004380    8948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 03:40:55.004435    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 03:40:56.047114    8948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.042683s)
	I0729 03:40:56.047193    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:40:56.052070    8948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:40:56.055017    8948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:40:56.058047    8948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 03:40:56.058057    8948 kubeadm.go:157] found existing configuration files:
	
	I0729 03:40:56.058102    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf
	I0729 03:40:56.060786    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 03:40:56.060829    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:40:56.063879    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf
	I0729 03:40:56.066679    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 03:40:56.066712    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:40:56.069744    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf
	I0729 03:40:56.072731    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 03:40:56.072771    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:40:56.076188    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf
	I0729 03:40:56.079685    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 03:40:56.079716    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
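The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is searched for the expected control-plane endpoint and removed when the endpoint is absent (here the files are already gone after the reset, so every grep exits with status 2 and each rm is a no-op). A simplified sketch of that check, with the endpoint hard-coded from the log (not minikube's actual kubeadm.go code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// the endpoint minikube greps for in each kubeconfig, taken from the log
const endpoint = "https://control-plane.minikube.internal:51469"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// endpoint missing (or file unreadable): treat the config as stale
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // needs root against the real paths; error ignored as in the log
		}
	}
}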
	I0729 03:40:56.082765    8948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 03:40:56.101966    8948 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 03:40:56.102059    8948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 03:40:56.159420    8948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 03:40:56.159550    8948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 03:40:56.159596    8948 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 03:40:56.215937    8948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 03:40:56.220167    8948 out.go:204]   - Generating certificates and keys ...
	I0729 03:40:56.220222    8948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 03:40:56.220288    8948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 03:40:56.220341    8948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 03:40:56.220418    8948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 03:40:56.220464    8948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 03:40:56.220505    8948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 03:40:56.220543    8948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 03:40:56.220575    8948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 03:40:56.220654    8948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 03:40:56.220701    8948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 03:40:56.220742    8948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 03:40:56.220771    8948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 03:40:56.389581    8948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 03:40:56.471939    8948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 03:40:56.631690    8948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 03:40:56.866916    8948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 03:40:56.898030    8948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 03:40:56.898425    8948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 03:40:56.898486    8948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 03:40:56.979682    8948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 03:40:56.983547    8948 out.go:204]   - Booting up control plane ...
	I0729 03:40:56.983599    8948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 03:40:56.983644    8948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 03:40:56.983711    8948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 03:40:56.983753    8948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 03:40:56.984198    8948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 03:40:56.055061    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:56.055120    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:56.066393    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:40:56.066456    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:56.078365    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:40:56.078425    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:56.089771    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:40:56.089845    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:56.100909    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:40:56.100980    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:56.112007    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:40:56.112077    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:56.129610    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:40:56.129679    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:56.140981    8811 logs.go:276] 0 containers: []
	W0729 03:40:56.140998    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:56.141059    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:56.152245    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:40:56.152264    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:40:56.152269    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:40:56.167713    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:40:56.167728    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:40:56.181451    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:40:56.181463    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:40:56.194227    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:40:56.194241    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:40:56.207502    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:40:56.207514    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:40:56.223740    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:40:56.223749    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:40:56.235942    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:56.235952    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:56.240510    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:40:56.240517    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:40:56.254799    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:40:56.254807    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:40:56.269280    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:40:56.269293    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:40:56.288944    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:56.288958    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:40:56.322253    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:56.322351    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:56.323728    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:56.323734    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:56.359258    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:40:56.359269    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:40:56.371456    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:56.371468    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:56.396670    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:40:56.396676    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:56.408126    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:56.408137    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:40:56.408166    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:40:56.408172    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:40:56.408176    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:40:56.408182    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:40:56.408185    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
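
The block above is minikube's diagnostics pass: for each control-plane component it lists matching containers via docker name filters, then tails each container's logs. A minimal Go sketch of that pattern, with illustrative helper names (this is not minikube's actual logs.go code), assuming docker is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches k8s_<component>,
// mirroring the "docker ps -a --filter=name=k8s_... --format={{.ID}}" lines.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as the "docker logs --tail 400 <id>" runs do.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
		}
	}
}
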
	I0729 03:41:01.488209    8948 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503852 seconds
	I0729 03:41:01.488277    8948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 03:41:01.491885    8948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 03:41:01.999675    8948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 03:41:01.999781    8948 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-590000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 03:41:02.503775    8948 kubeadm.go:310] [bootstrap-token] Using token: k23ilj.fm7zinf82r1k73h9
	I0729 03:41:02.510135    8948 out.go:204]   - Configuring RBAC rules ...
	I0729 03:41:02.510191    8948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 03:41:02.510242    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 03:41:02.516889    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 03:41:02.517686    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 03:41:02.518481    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 03:41:02.519287    8948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 03:41:02.523451    8948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 03:41:02.726941    8948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 03:41:02.907505    8948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 03:41:02.908027    8948 kubeadm.go:310] 
	I0729 03:41:02.908062    8948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 03:41:02.908065    8948 kubeadm.go:310] 
	I0729 03:41:02.908101    8948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 03:41:02.908105    8948 kubeadm.go:310] 
	I0729 03:41:02.908121    8948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 03:41:02.908151    8948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 03:41:02.908190    8948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 03:41:02.908213    8948 kubeadm.go:310] 
	I0729 03:41:02.908272    8948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 03:41:02.908278    8948 kubeadm.go:310] 
	I0729 03:41:02.908358    8948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 03:41:02.908362    8948 kubeadm.go:310] 
	I0729 03:41:02.908420    8948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 03:41:02.908476    8948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 03:41:02.908533    8948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 03:41:02.908537    8948 kubeadm.go:310] 
	I0729 03:41:02.908575    8948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 03:41:02.908615    8948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 03:41:02.908620    8948 kubeadm.go:310] 
	I0729 03:41:02.908726    8948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k23ilj.fm7zinf82r1k73h9 \
	I0729 03:41:02.908798    8948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 \
	I0729 03:41:02.908808    8948 kubeadm.go:310] 	--control-plane 
	I0729 03:41:02.908810    8948 kubeadm.go:310] 
	I0729 03:41:02.908867    8948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 03:41:02.908870    8948 kubeadm.go:310] 
	I0729 03:41:02.908992    8948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k23ilj.fm7zinf82r1k73h9 \
	I0729 03:41:02.909063    8948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 
	I0729 03:41:02.909132    8948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 03:41:02.909142    8948 cni.go:84] Creating CNI manager for ""
	I0729 03:41:02.909164    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:41:02.913424    8948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 03:41:02.920391    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 03:41:02.924058    8948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 03:41:02.928901    8948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 03:41:02.928977    8948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-590000 minikube.k8s.io/updated_at=2024_07_29T03_41_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=stopped-upgrade-590000 minikube.k8s.io/primary=true
	I0729 03:41:02.928981    8948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 03:41:02.962443    8948 kubeadm.go:1113] duration metric: took 33.499667ms to wait for elevateKubeSystemPrivileges
	I0729 03:41:02.981196    8948 ops.go:34] apiserver oom_adj: -16
	I0729 03:41:02.981303    8948 kubeadm.go:394] duration metric: took 4m12.205867791s to StartCluster
	I0729 03:41:02.981318    8948 settings.go:142] acquiring lock: {Name:mk5fe4de5daf4f1a01814785384dc93f95ac574d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:41:02.981407    8948 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:41:02.981809    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:41:02.982025    8948 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:41:02.982043    8948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 03:41:02.982083    8948 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-590000"
	I0729 03:41:02.982092    8948 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-590000"
	I0729 03:41:02.982095    8948 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-590000"
	W0729 03:41:02.982144    8948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 03:41:02.982103    8948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-590000"
	I0729 03:41:02.982156    8948 host.go:66] Checking if "stopped-upgrade-590000" exists ...
	I0729 03:41:02.982124    8948 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:41:02.986354    8948 out.go:177] * Verifying Kubernetes components...
	I0729 03:41:02.987095    8948 kapi.go:59] client config for stopped-upgrade-590000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b60080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
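
The rest.Config dump above is a client-go configuration built from the profile's client certificate, key, and cluster CA. A minimal sketch of constructing an equivalent clientset, assuming k8s.io/client-go is on the module path (not minikube's kapi code; paths are the ones from the log line):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	home := "/Users/jenkins/minikube-integration/19337-6349/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/profiles/stopped-upgrade-590000/client.crt",
			KeyFile:  home + "/profiles/stopped-upgrade-590000/client.key",
			CAFile:   home + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("building clientset failed:", err)
		return
	}
	fmt.Println("clientset ready:", clientset != nil)
}
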
	I0729 03:41:02.990717    8948 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-590000"
	W0729 03:41:02.990722    8948 addons.go:243] addon default-storageclass should already be in state true
	I0729 03:41:02.990729    8948 host.go:66] Checking if "stopped-upgrade-590000" exists ...
	I0729 03:41:02.991247    8948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 03:41:02.991252    8948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 03:41:02.991257    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:41:02.994329    8948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:41:02.998439    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:41:03.002406    8948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:41:03.002412    8948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 03:41:03.002417    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:41:03.085064    8948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:41:03.090178    8948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:41:03.090219    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:41:03.093994    8948 api_server.go:72] duration metric: took 111.960584ms to wait for apiserver process to appear ...
	I0729 03:41:03.094002    8948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:41:03.094009    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:03.118661    8948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 03:41:03.146056    8948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
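
Both apply commands above follow the same addon-enable pattern: the manifest is first streamed onto the node (the earlier "scp memory -->" lines), then applied with the node-local kubectl against the embedded kubeconfig. A hedged sketch of that second step; the real runner executes this over SSH inside the guest, while this sketch runs it locally:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the logged command:
// sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <manifest>
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
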
	I0729 03:41:06.412142    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:08.095997    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:08.096023    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:11.414308    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:11.414416    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:41:11.427140    8811 logs.go:276] 1 containers: [65ac65a22bea]
	I0729 03:41:11.427214    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:41:11.439230    8811 logs.go:276] 1 containers: [b34a8a6ca4e1]
	I0729 03:41:11.439309    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:41:11.450259    8811 logs.go:276] 4 containers: [f2e71a487c88 84567be55aaf feaa048ca969 5d89100d144a]
	I0729 03:41:11.450334    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:41:11.461159    8811 logs.go:276] 1 containers: [39391c315068]
	I0729 03:41:11.461221    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:41:11.471645    8811 logs.go:276] 1 containers: [d38acb2d8d16]
	I0729 03:41:11.471704    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:41:11.481879    8811 logs.go:276] 1 containers: [570798ebd35a]
	I0729 03:41:11.481945    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:41:11.491897    8811 logs.go:276] 0 containers: []
	W0729 03:41:11.491910    8811 logs.go:278] No container was found matching "kindnet"
	I0729 03:41:11.491968    8811 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:41:11.502845    8811 logs.go:276] 1 containers: [700ed4f4c0c6]
	I0729 03:41:11.502861    8811 logs.go:123] Gathering logs for kubelet ...
	I0729 03:41:11.502866    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 03:41:11.536869    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:41:11.536969    8811 logs.go:138] Found kubelet problem: Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:41:11.538307    8811 logs.go:123] Gathering logs for coredns [feaa048ca969] ...
	I0729 03:41:11.538313    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feaa048ca969"
	I0729 03:41:11.550338    8811 logs.go:123] Gathering logs for kube-scheduler [39391c315068] ...
	I0729 03:41:11.550349    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39391c315068"
	I0729 03:41:11.564939    8811 logs.go:123] Gathering logs for Docker ...
	I0729 03:41:11.564949    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:41:11.589245    8811 logs.go:123] Gathering logs for dmesg ...
	I0729 03:41:11.589271    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:41:11.594008    8811 logs.go:123] Gathering logs for coredns [f2e71a487c88] ...
	I0729 03:41:11.594022    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2e71a487c88"
	I0729 03:41:11.608896    8811 logs.go:123] Gathering logs for coredns [5d89100d144a] ...
	I0729 03:41:11.608908    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d89100d144a"
	I0729 03:41:11.621242    8811 logs.go:123] Gathering logs for kube-proxy [d38acb2d8d16] ...
	I0729 03:41:11.621257    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d38acb2d8d16"
	I0729 03:41:11.633184    8811 logs.go:123] Gathering logs for storage-provisioner [700ed4f4c0c6] ...
	I0729 03:41:11.633195    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 700ed4f4c0c6"
	I0729 03:41:11.646312    8811 logs.go:123] Gathering logs for kube-apiserver [65ac65a22bea] ...
	I0729 03:41:11.646322    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ac65a22bea"
	I0729 03:41:11.660756    8811 logs.go:123] Gathering logs for etcd [b34a8a6ca4e1] ...
	I0729 03:41:11.660769    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b34a8a6ca4e1"
	I0729 03:41:11.674148    8811 logs.go:123] Gathering logs for kube-controller-manager [570798ebd35a] ...
	I0729 03:41:11.674157    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 570798ebd35a"
	I0729 03:41:11.692423    8811 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:41:11.692435    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:41:11.727558    8811 logs.go:123] Gathering logs for coredns [84567be55aaf] ...
	I0729 03:41:11.727572    8811 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84567be55aaf"
	I0729 03:41:11.739404    8811 logs.go:123] Gathering logs for container status ...
	I0729 03:41:11.739414    8811 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:41:11.753294    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:41:11.753305    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 03:41:11.753333    8811 out.go:239] X Problems detected in kubelet:
	W0729 03:41:11.753338    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	W0729 03:41:11.753343    8811 out.go:239]   Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	I0729 03:41:11.753347    8811 out.go:304] Setting ErrFile to fd 2...
	I0729 03:41:11.753351    8811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:41:13.096155    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:13.096192    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:18.096394    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:18.096427    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:21.757262    8811 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:26.759443    8811 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:26.763973    8811 out.go:177] 
	W0729 03:41:26.766936    8811 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 03:41:26.766943    8811 out.go:239] * 
	W0729 03:41:26.767685    8811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:41:26.779832    8811 out.go:177] 
	I0729 03:41:23.096719    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:23.096779    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:28.097294    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:28.097316    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:33.097825    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:33.097870    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 03:41:33.459337    8948 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 03:41:33.462796    8948 out.go:177] * Enabled addons: storage-provisioner
	I0729 03:41:33.470565    8948 addons.go:510] duration metric: took 30.4891155s for enable addons: enabled=[storage-provisioner]
	I0729 03:41:38.098677    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:38.098710    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
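
The alternating "Checking apiserver healthz" / "stopped:" lines above are a polling loop: a GET against /healthz with a short per-request timeout, retried until the overall node-wait deadline ("Will wait 6m0s for node" earlier in the log) expires. A minimal sketch of that loop, not minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request cap; matches the ~5s spacing of the checks above
		Transport: &http.Transport{
			// The cluster CA is self-signed inside the VM; verification is skipped for this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // the "stopped: ... context deadline exceeded" lines
			time.Sleep(time.Second)          // brief backoff between checks
			continue
		}
		status := resp.StatusCode
		resp.Body.Close()
		if status == http.StatusOK {
			fmt.Println("apiserver healthz reported healthy")
			return
		}
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}
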
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 10:32:34 UTC, ends at Mon 2024-07-29 10:41:42 UTC. --
	Jul 29 10:41:23 running-upgrade-376000 dockerd[2845]: time="2024-07-29T10:41:23.875577814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 10:41:23 running-upgrade-376000 dockerd[2845]: time="2024-07-29T10:41:23.875594354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 10:41:23 running-upgrade-376000 dockerd[2845]: time="2024-07-29T10:41:23.875636226Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/14417ce169633b2dd36065911f06fba8e40ad0bae7e81edc2161bd82869c36d5 pid=17122 runtime=io.containerd.runc.v2
	Jul 29 10:41:24 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:24Z" level=error msg="ContainerStats resp: {0x4000910040 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x40007d5300 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x40008689c0 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x4000868a80 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x4000868280 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x40007d4680 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x40007d4c40 linux}"
	Jul 29 10:41:25 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:25Z" level=error msg="ContainerStats resp: {0x40007d5440 linux}"
	Jul 29 10:41:30 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:30Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 10:41:35 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:35Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 10:41:35 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:35Z" level=error msg="ContainerStats resp: {0x4000359f40 linux}"
	Jul 29 10:41:35 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:35Z" level=error msg="ContainerStats resp: {0x4000910040 linux}"
	Jul 29 10:41:36 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:36Z" level=error msg="ContainerStats resp: {0x40005ced80 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x400047f700 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x400047f8c0 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x40007ae4c0 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x40007ae600 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x40007af080 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x40007af680 linux}"
	Jul 29 10:41:37 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:37Z" level=error msg="ContainerStats resp: {0x40007afa00 linux}"
	Jul 29 10:41:40 running-upgrade-376000 cri-dockerd[2684]: time="2024-07-29T10:41:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b39b219f3b742       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   dd586904956d7
	14417ce169633       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   17c77b6b8fe5e
	f2e71a487c888       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   dd586904956d7
	84567be55aaff       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   17c77b6b8fe5e
	700ed4f4c0c66       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   db825b43d8834
	d38acb2d8d16e       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   7e890aa3bd5b3
	b34a8a6ca4e17       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   569117767f86a
	570798ebd35af       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   6d580591dde7b
	65ac65a22beae       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   b986e6c857bc3
	39391c315068e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   dff5a9b6e990c
	
	
	==> coredns [14417ce16963] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 602721615905006239.8989924152257428312. HINFO: read udp 10.244.0.2:42142->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 602721615905006239.8989924152257428312. HINFO: read udp 10.244.0.2:37248->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 602721615905006239.8989924152257428312. HINFO: read udp 10.244.0.2:60924->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 602721615905006239.8989924152257428312. HINFO: read udp 10.244.0.2:59093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 602721615905006239.8989924152257428312. HINFO: read udp 10.244.0.2:43855->10.0.2.3:53: i/o timeout
	
	
	==> coredns [84567be55aaf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:51371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:40885->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:33765->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:41303->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:58603->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:38775->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:53472->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:56688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:36922->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3162238571593144156.7187477736724343197. HINFO: read udp 10.244.0.2:45374->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b39b219f3b74] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7227549092227512656.6428408447061853278. HINFO: read udp 10.244.0.3:36145->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7227549092227512656.6428408447061853278. HINFO: read udp 10.244.0.3:59528->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7227549092227512656.6428408447061853278. HINFO: read udp 10.244.0.3:55068->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7227549092227512656.6428408447061853278. HINFO: read udp 10.244.0.3:34671->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7227549092227512656.6428408447061853278. HINFO: read udp 10.244.0.3:39467->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f2e71a487c88] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:42956->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:48751->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:37590->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:47186->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:32953->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:56610->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:53506->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:46766->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4760533629609018731.7986223506103580028. HINFO: read udp 10.244.0.3:56166->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
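
All four CoreDNS instances fail the same way: their HINFO self-test queries to the upstream resolver at 10.0.2.3:53 (the DNS built into QEMU's user-mode networking) time out over UDP, so nothing in the guest can resolve external names. A small sketch that reproduces the probe from Go, assuming it runs inside the same guest network:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force lookups through the VM-side resolver that CoreDNS forwards to.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.io")
	// In the failing environment err mirrors the "read udp ... i/o timeout" errors above.
	fmt.Println(addrs, err)
}
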
	
	
	==> describe nodes <==
	Name:               running-upgrade-376000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-376000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=running-upgrade-376000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T03_37_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:37:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-376000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:41:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:37:22 +0000   Mon, 29 Jul 2024 10:37:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:37:22 +0000   Mon, 29 Jul 2024 10:37:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:37:22 +0000   Mon, 29 Jul 2024 10:37:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:37:22 +0000   Mon, 29 Jul 2024 10:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-376000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3c6f7f6fa1241b4bc2bbcee601f48dd
	  System UUID:                b3c6f7f6fa1241b4bc2bbcee601f48dd
	  Boot ID:                    b0b3e6bc-ae16-4eeb-a134-27160b4f752a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9pk7n                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-hw9nx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-376000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-376000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-376000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-9fpnx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-376000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-376000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-376000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-376000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-376000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s   node-controller  Node running-upgrade-376000 event: Registered Node running-upgrade-376000 in Controller
	
	
	==> dmesg <==
	[  +1.714524] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.082721] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.078982] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.140070] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.083708] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.075524] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.298233] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.151627] systemd-fstab-generator[1926]: Ignoring "noauto" for root device
	[Jul29 10:33] systemd-fstab-generator[2203]: Ignoring "noauto" for root device
	[  +0.194127] systemd-fstab-generator[2244]: Ignoring "noauto" for root device
	[  +0.096319] systemd-fstab-generator[2255]: Ignoring "noauto" for root device
	[  +0.098644] systemd-fstab-generator[2268]: Ignoring "noauto" for root device
	[  +1.525866] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.103694] systemd-fstab-generator[2641]: Ignoring "noauto" for root device
	[  +0.081455] systemd-fstab-generator[2652]: Ignoring "noauto" for root device
	[  +0.079993] systemd-fstab-generator[2663]: Ignoring "noauto" for root device
	[  +0.087482] systemd-fstab-generator[2677]: Ignoring "noauto" for root device
	[  +2.307332] systemd-fstab-generator[2830]: Ignoring "noauto" for root device
	[  +2.303283] systemd-fstab-generator[3198]: Ignoring "noauto" for root device
	[  +1.306276] systemd-fstab-generator[3525]: Ignoring "noauto" for root device
	[ +17.960665] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 10:37] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.554762] systemd-fstab-generator[11571]: Ignoring "noauto" for root device
	[  +5.645319] systemd-fstab-generator[12174]: Ignoring "noauto" for root device
	[  +0.454850] systemd-fstab-generator[12311]: Ignoring "noauto" for root device
	
	
	==> etcd [b34a8a6ca4e1] <==
	{"level":"info","ts":"2024-07-29T10:37:17.794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T10:37:17.794Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T10:37:17.796Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T10:37:17.796Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T10:37:17.796Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T10:37:17.796Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T10:37:17.796Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T10:37:17.862Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:37:17.864Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:37:17.864Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:37:17.864Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:37:17.864Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-376000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T10:37:17.864Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:37:17.865Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:37:17.865Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-29T10:37:17.874Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T10:37:17.874Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T10:37:17.874Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:41:43 up 9 min,  0 users,  load average: 0.12, 0.31, 0.17
	Linux running-upgrade-376000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [65ac65a22bea] <==
	I0729 10:37:19.556373       1 cache.go:39] Caches are synced for autoregister controller
	I0729 10:37:19.556380       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 10:37:19.556388       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 10:37:19.556395       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 10:37:19.556400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 10:37:19.586068       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 10:37:19.615462       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 10:37:20.295037       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 10:37:20.460460       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 10:37:20.463434       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 10:37:20.463458       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 10:37:20.604969       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 10:37:20.618642       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 10:37:20.725164       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 10:37:20.727493       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 10:37:20.727846       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 10:37:20.729191       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 10:37:21.592229       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 10:37:22.308923       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 10:37:22.314463       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 10:37:22.318839       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 10:37:22.364968       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 10:37:35.420692       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 10:37:35.621168       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 10:37:35.968398       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [570798ebd35a] <==
	I0729 10:37:34.719497       1 shared_informer.go:262] Caches are synced for ephemeral
	I0729 10:37:34.719536       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 10:37:34.719565       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 10:37:34.719574       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 10:37:34.719583       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 10:37:34.719592       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 10:37:34.719762       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0729 10:37:34.722477       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0729 10:37:34.723256       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 10:37:34.724306       1 shared_informer.go:262] Caches are synced for endpoint
	I0729 10:37:34.752202       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 10:37:34.756626       1 shared_informer.go:262] Caches are synced for PVC protection
	I0729 10:37:34.769544       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0729 10:37:34.769552       1 shared_informer.go:262] Caches are synced for disruption
	I0729 10:37:34.769557       1 disruption.go:371] Sending events to api server.
	I0729 10:37:34.778394       1 shared_informer.go:262] Caches are synced for deployment
	I0729 10:37:34.923702       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 10:37:34.924787       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 10:37:35.336231       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 10:37:35.399097       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 10:37:35.399122       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 10:37:35.423295       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9fpnx"
	I0729 10:37:35.622637       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 10:37:35.725681       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hw9nx"
	I0729 10:37:35.731423       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9pk7n"
	
	
	==> kube-proxy [d38acb2d8d16] <==
	I0729 10:37:35.927489       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 10:37:35.927595       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 10:37:35.927634       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 10:37:35.965202       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 10:37:35.965213       1 server_others.go:206] "Using iptables Proxier"
	I0729 10:37:35.965283       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 10:37:35.965430       1 server.go:661] "Version info" version="v1.24.1"
	I0729 10:37:35.965439       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:37:35.966290       1 config.go:444] "Starting node config controller"
	I0729 10:37:35.966321       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 10:37:35.966489       1 config.go:226] "Starting endpoint slice config controller"
	I0729 10:37:35.966510       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 10:37:35.966542       1 config.go:317] "Starting service config controller"
	I0729 10:37:35.966559       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 10:37:36.066521       1 shared_informer.go:262] Caches are synced for node config
	I0729 10:37:36.066535       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 10:37:36.067583       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [39391c315068] <==
	W0729 10:37:19.517724       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 10:37:19.518146       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 10:37:19.517735       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 10:37:19.518303       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 10:37:19.518337       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 10:37:19.518445       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 10:37:19.517770       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 10:37:19.517748       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 10:37:19.518565       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 10:37:19.517791       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 10:37:19.518623       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 10:37:19.517803       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 10:37:19.518658       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 10:37:19.517813       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 10:37:19.518699       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 10:37:19.517824       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:37:19.518730       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:37:19.518756       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 10:37:19.517781       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:37:19.518815       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 10:37:20.370603       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 10:37:20.370692       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 10:37:20.530336       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 10:37:20.530406       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 10:37:20.716363       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 10:32:34 UTC, ends at Mon 2024-07-29 10:41:43 UTC. --
	Jul 29 10:37:23 running-upgrade-376000 kubelet[12180]: I0729 10:37:23.793117   12180 reconciler.go:157] "Reconciler: start to sync state"
	Jul 29 10:37:23 running-upgrade-376000 kubelet[12180]: E0729 10:37:23.944701   12180 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-376000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-376000"
	Jul 29 10:37:24 running-upgrade-376000 kubelet[12180]: E0729 10:37:24.145660   12180 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-376000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-376000"
	Jul 29 10:37:24 running-upgrade-376000 kubelet[12180]: E0729 10:37:24.341960   12180 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-376000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-376000"
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: I0729 10:37:34.652910   12180 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: W0729 10:37:34.655605   12180 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: E0729 10:37:34.655627   12180 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-376000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-376000' and this object
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: I0729 10:37:34.703376   12180 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: I0729 10:37:34.703454   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/67ec8e56-8964-40af-a024-10596cb290f0-tmp\") pod \"storage-provisioner\" (UID: \"67ec8e56-8964-40af-a024-10596cb290f0\") " pod="kube-system/storage-provisioner"
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: I0729 10:37:34.703472   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82z8p\" (UniqueName: \"kubernetes.io/projected/67ec8e56-8964-40af-a024-10596cb290f0-kube-api-access-82z8p\") pod \"storage-provisioner\" (UID: \"67ec8e56-8964-40af-a024-10596cb290f0\") " pod="kube-system/storage-provisioner"
	Jul 29 10:37:34 running-upgrade-376000 kubelet[12180]: I0729 10:37:34.703758   12180 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.426333   12180 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.510928   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d420a88a-e43a-4599-9a98-1b3ea92178a0-kube-proxy\") pod \"kube-proxy-9fpnx\" (UID: \"d420a88a-e43a-4599-9a98-1b3ea92178a0\") " pod="kube-system/kube-proxy-9fpnx"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.511045   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d420a88a-e43a-4599-9a98-1b3ea92178a0-lib-modules\") pod \"kube-proxy-9fpnx\" (UID: \"d420a88a-e43a-4599-9a98-1b3ea92178a0\") " pod="kube-system/kube-proxy-9fpnx"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.511065   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69599\" (UniqueName: \"kubernetes.io/projected/d420a88a-e43a-4599-9a98-1b3ea92178a0-kube-api-access-69599\") pod \"kube-proxy-9fpnx\" (UID: \"d420a88a-e43a-4599-9a98-1b3ea92178a0\") " pod="kube-system/kube-proxy-9fpnx"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.511080   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d420a88a-e43a-4599-9a98-1b3ea92178a0-xtables-lock\") pod \"kube-proxy-9fpnx\" (UID: \"d420a88a-e43a-4599-9a98-1b3ea92178a0\") " pod="kube-system/kube-proxy-9fpnx"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.727135   12180 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.737168   12180 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.812952   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83619489-69ee-4e6a-a210-627276f7c9dd-config-volume\") pod \"coredns-6d4b75cb6d-hw9nx\" (UID: \"83619489-69ee-4e6a-a210-627276f7c9dd\") " pod="kube-system/coredns-6d4b75cb6d-hw9nx"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.813040   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jb6s\" (UniqueName: \"kubernetes.io/projected/83619489-69ee-4e6a-a210-627276f7c9dd-kube-api-access-2jb6s\") pod \"coredns-6d4b75cb6d-hw9nx\" (UID: \"83619489-69ee-4e6a-a210-627276f7c9dd\") " pod="kube-system/coredns-6d4b75cb6d-hw9nx"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.813080   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2tj5\" (UniqueName: \"kubernetes.io/projected/8e71a7de-1941-4c5b-9d79-41dbb21acb98-kube-api-access-t2tj5\") pod \"coredns-6d4b75cb6d-9pk7n\" (UID: \"8e71a7de-1941-4c5b-9d79-41dbb21acb98\") " pod="kube-system/coredns-6d4b75cb6d-9pk7n"
	Jul 29 10:37:35 running-upgrade-376000 kubelet[12180]: I0729 10:37:35.813093   12180 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e71a7de-1941-4c5b-9d79-41dbb21acb98-config-volume\") pod \"coredns-6d4b75cb6d-9pk7n\" (UID: \"8e71a7de-1941-4c5b-9d79-41dbb21acb98\") " pod="kube-system/coredns-6d4b75cb6d-9pk7n"
	Jul 29 10:37:36 running-upgrade-376000 kubelet[12180]: I0729 10:37:36.491844   12180 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="17c77b6b8fe5ef325549cb1f067f7484581ef9df94249dd73ad31f5c276553f4"
	Jul 29 10:41:24 running-upgrade-376000 kubelet[12180]: I0729 10:41:24.701668   12180 scope.go:110] "RemoveContainer" containerID="5d89100d144a181681b36ad382f2e7e3f1a6e7e47f895060e1ef7fe71155a607"
	Jul 29 10:41:24 running-upgrade-376000 kubelet[12180]: I0729 10:41:24.726718   12180 scope.go:110] "RemoveContainer" containerID="feaa048ca969af027967b55e6a6b78ee15a1feb2f15bc1c41b1182c87f63dbef"
	
	
	==> storage-provisioner [700ed4f4c0c6] <==
	I0729 10:37:36.051763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 10:37:36.056127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 10:37:36.056191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 10:37:36.059418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 10:37:36.059515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-376000_21a917bd-11d8-41de-857f-213102a867e5!
	I0729 10:37:36.059566       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12d557be-09fd-47f9-ae35-650395b5cd7d", APIVersion:"v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-376000_21a917bd-11d8-41de-857f-213102a867e5 became leader
	I0729 10:37:36.160300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-376000_21a917bd-11d8-41de-857f-213102a867e5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-376000 -n running-upgrade-376000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-376000 -n running-upgrade-376000: exit status 2 (15.748412083s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-376000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-376000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-376000
--- FAIL: TestRunningBinaryUpgrade (588.93s)
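
Editor's note on the failure above: the embedded `minikube logs` dump shows every control-plane component starting cleanly (etcd elects itself leader at term 2, the apiserver syncs its caches, kube-proxy and CoreDNS come up), yet the subsequent `status --format={{.APIServer}}` probe reports the apiserver as Stopped. For local reproduction, a minimal Go sketch of that final probe; the binary path, profile name, and flags are copied from the log lines above, and everything else is illustrative rather than part of the test harness:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the helpers_test.go:254 step above; the
	// binary path and profile name are specific to this report.
	out, err := exec.Command(
		"out/minikube-darwin-arm64", "status",
		"--format={{.APIServer}}",
		"-p", "running-upgrade-376000",
		"-n", "running-upgrade-376000",
	).Output()
	state := strings.TrimSpace(string(out))
	// minikube exits non-zero (exit status 2 above) when a component
	// is down, so err is expected alongside a "Stopped" state.
	fmt.Printf("apiserver state: %q (err: %v)\n", state, err)
}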

TestKubernetesUpgrade (18.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-520000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-520000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.732933084s)

-- stdout --
	* [kubernetes-upgrade-520000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-520000" primary control-plane node in "kubernetes-upgrade-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:35:10.367114    8873 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:35:10.367250    8873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:35:10.367253    8873 out.go:304] Setting ErrFile to fd 2...
	I0729 03:35:10.367259    8873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:35:10.367385    8873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:35:10.368423    8873 out.go:298] Setting JSON to false
	I0729 03:35:10.385477    8873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5679,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:35:10.385547    8873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:35:10.391374    8873 out.go:177] * [kubernetes-upgrade-520000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:35:10.395517    8873 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:35:10.395568    8873 notify.go:220] Checking for updates...
	I0729 03:35:10.405555    8873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:35:10.412480    8873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:35:10.416472    8873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:35:10.420591    8873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:35:10.423515    8873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:35:10.426904    8873 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:35:10.426978    8873 config.go:182] Loaded profile config "running-upgrade-376000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:35:10.427019    8873 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:35:10.430508    8873 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:35:10.437472    8873 start.go:297] selected driver: qemu2
	I0729 03:35:10.437478    8873 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:35:10.437483    8873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:35:10.439744    8873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:35:10.443547    8873 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:35:10.446610    8873 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:35:10.446653    8873 cni.go:84] Creating CNI manager for ""
	I0729 03:35:10.446663    8873 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:35:10.446691    8873 start.go:340] cluster config:
	{Name:kubernetes-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:35:10.450117    8873 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:35:10.458380    8873 out.go:177] * Starting "kubernetes-upgrade-520000" primary control-plane node in "kubernetes-upgrade-520000" cluster
	I0729 03:35:10.462554    8873 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:35:10.462571    8873 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:35:10.462583    8873 cache.go:56] Caching tarball of preloaded images
	I0729 03:35:10.462654    8873 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:35:10.462660    8873 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:35:10.462734    8873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kubernetes-upgrade-520000/config.json ...
	I0729 03:35:10.462749    8873 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kubernetes-upgrade-520000/config.json: {Name:mk858332f6e4d7364fcdf8e936259a7f8bd115b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:35:10.463086    8873 start.go:360] acquireMachinesLock for kubernetes-upgrade-520000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:35:10.463119    8873 start.go:364] duration metric: took 26.125µs to acquireMachinesLock for "kubernetes-upgrade-520000"
	I0729 03:35:10.463130    8873 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:35:10.463170    8873 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:35:10.465074    8873 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:35:10.481102    8873 start.go:159] libmachine.API.Create for "kubernetes-upgrade-520000" (driver="qemu2")
	I0729 03:35:10.481129    8873 client.go:168] LocalClient.Create starting
	I0729 03:35:10.481189    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:35:10.481218    8873 main.go:141] libmachine: Decoding PEM data...
	I0729 03:35:10.481228    8873 main.go:141] libmachine: Parsing certificate...
	I0729 03:35:10.481270    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:35:10.481293    8873 main.go:141] libmachine: Decoding PEM data...
	I0729 03:35:10.481301    8873 main.go:141] libmachine: Parsing certificate...
	I0729 03:35:10.481628    8873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:35:10.633963    8873 main.go:141] libmachine: Creating SSH key...
	I0729 03:35:10.670646    8873 main.go:141] libmachine: Creating Disk image...
	I0729 03:35:10.670652    8873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:35:10.670867    8873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:10.680509    8873 main.go:141] libmachine: STDOUT: 
	I0729 03:35:10.680528    8873 main.go:141] libmachine: STDERR: 
	I0729 03:35:10.680580    8873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2 +20000M
	I0729 03:35:10.688820    8873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:35:10.688835    8873 main.go:141] libmachine: STDERR: 
	I0729 03:35:10.688850    8873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:10.688855    8873 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:35:10.688865    8873 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:35:10.688900    8873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:38:bb:09:cd:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:10.690543    8873 main.go:141] libmachine: STDOUT: 
	I0729 03:35:10.690559    8873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:35:10.690578    8873 client.go:171] duration metric: took 209.445833ms to LocalClient.Create
	I0729 03:35:12.692669    8873 start.go:128] duration metric: took 2.22952625s to createHost
	I0729 03:35:12.692745    8873 start.go:83] releasing machines lock for "kubernetes-upgrade-520000", held for 2.229662167s
	W0729 03:35:12.692782    8873 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:35:12.708316    8873 out.go:177] * Deleting "kubernetes-upgrade-520000" in qemu2 ...
	W0729 03:35:12.731186    8873 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:35:12.731205    8873 start.go:729] Will try again in 5 seconds ...
	I0729 03:35:17.733435    8873 start.go:360] acquireMachinesLock for kubernetes-upgrade-520000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:35:17.734062    8873 start.go:364] duration metric: took 483.833µs to acquireMachinesLock for "kubernetes-upgrade-520000"
	I0729 03:35:17.734194    8873 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:35:17.734393    8873 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:35:17.738010    8873 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:35:17.783706    8873 start.go:159] libmachine.API.Create for "kubernetes-upgrade-520000" (driver="qemu2")
	I0729 03:35:17.783757    8873 client.go:168] LocalClient.Create starting
	I0729 03:35:17.783880    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:35:17.783946    8873 main.go:141] libmachine: Decoding PEM data...
	I0729 03:35:17.783963    8873 main.go:141] libmachine: Parsing certificate...
	I0729 03:35:17.784033    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:35:17.784078    8873 main.go:141] libmachine: Decoding PEM data...
	I0729 03:35:17.784089    8873 main.go:141] libmachine: Parsing certificate...
	I0729 03:35:17.784728    8873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:35:17.944419    8873 main.go:141] libmachine: Creating SSH key...
	I0729 03:35:17.998760    8873 main.go:141] libmachine: Creating Disk image...
	I0729 03:35:17.998769    8873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:35:17.998995    8873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:18.008312    8873 main.go:141] libmachine: STDOUT: 
	I0729 03:35:18.008332    8873 main.go:141] libmachine: STDERR: 
	I0729 03:35:18.008386    8873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2 +20000M
	I0729 03:35:18.016327    8873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:35:18.016350    8873 main.go:141] libmachine: STDERR: 
	I0729 03:35:18.016366    8873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:18.016370    8873 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:35:18.016378    8873 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:35:18.016420    8873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:52:a1:35:2c:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:18.018077    8873 main.go:141] libmachine: STDOUT: 
	I0729 03:35:18.018108    8873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:35:18.018121    8873 client.go:171] duration metric: took 234.363833ms to LocalClient.Create
	I0729 03:35:20.020444    8873 start.go:128] duration metric: took 2.28605775s to createHost
	I0729 03:35:20.020531    8873 start.go:83] releasing machines lock for "kubernetes-upgrade-520000", held for 2.286487s
	W0729 03:35:20.020856    8873 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:35:20.035369    8873 out.go:177] 
	W0729 03:35:20.039564    8873 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:35:20.039591    8873 out.go:239] * 
	* 
	W0729 03:35:20.042245    8873 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:35:20.055561    8873 out.go:177] 

** /stderr **
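
Editor's note: both provisioning attempts above fail at the identical step: `socket_vmnet_client` cannot connect to the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the qemu2 VM never attaches to the socket_vmnet network and minikube aborts with GUEST_PROVISION. That points at the CI host (the socket_vmnet daemon not running or not listening) rather than at the upgrade path under test. A minimal pre-flight check, sketched in Go under the assumption that the socket path printed in the log is the one in use; this is illustrative and not part of the harness:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client uses; a refused
	// connection reproduces the failure mode seen in this run.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused, restarting the socket_vmnet daemon on the host (however it is managed there) is the likely fix before rerunning the suite.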
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-520000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-520000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-520000: (3.122408167s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-520000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-520000 status --format={{.Host}}: exit status 7 (45.384666ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-520000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-520000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.188193875s)

-- stdout --
	* [kubernetes-upgrade-520000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-520000" primary control-plane node in "kubernetes-upgrade-520000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-520000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-520000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:35:23.268593    8910 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:35:23.268722    8910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:35:23.268726    8910 out.go:304] Setting ErrFile to fd 2...
	I0729 03:35:23.268729    8910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:35:23.268863    8910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:35:23.269914    8910 out.go:298] Setting JSON to false
	I0729 03:35:23.286133    8910 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5692,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:35:23.286206    8910 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:35:23.291607    8910 out.go:177] * [kubernetes-upgrade-520000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:35:23.299582    8910 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:35:23.299640    8910 notify.go:220] Checking for updates...
	I0729 03:35:23.307506    8910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:35:23.311519    8910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:35:23.315577    8910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:35:23.318559    8910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:35:23.321566    8910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:35:23.324944    8910 config.go:182] Loaded profile config "kubernetes-upgrade-520000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 03:35:23.325226    8910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:35:23.329593    8910 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:35:23.336617    8910 start.go:297] selected driver: qemu2
	I0729 03:35:23.336624    8910 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:35:23.336696    8910 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:35:23.339099    8910 cni.go:84] Creating CNI manager for ""
	I0729 03:35:23.339115    8910 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:35:23.339143    8910 start.go:340] cluster config:
	{Name:kubernetes-upgrade-520000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-520000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:35:23.342638    8910 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:35:23.351580    8910 out.go:177] * Starting "kubernetes-upgrade-520000" primary control-plane node in "kubernetes-upgrade-520000" cluster
	I0729 03:35:23.355498    8910 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:35:23.355513    8910 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 03:35:23.355524    8910 cache.go:56] Caching tarball of preloaded images
	I0729 03:35:23.355597    8910 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:35:23.355604    8910 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 03:35:23.355667    8910 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kubernetes-upgrade-520000/config.json ...
	I0729 03:35:23.355954    8910 start.go:360] acquireMachinesLock for kubernetes-upgrade-520000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:35:23.355981    8910 start.go:364] duration metric: took 21.042µs to acquireMachinesLock for "kubernetes-upgrade-520000"
	I0729 03:35:23.355990    8910 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:35:23.355997    8910 fix.go:54] fixHost starting: 
	I0729 03:35:23.356116    8910 fix.go:112] recreateIfNeeded on kubernetes-upgrade-520000: state=Stopped err=<nil>
	W0729 03:35:23.356127    8910 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:35:23.359628    8910 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-520000" ...
	I0729 03:35:23.367584    8910 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:35:23.367634    8910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:52:a1:35:2c:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:23.369677    8910 main.go:141] libmachine: STDOUT: 
	I0729 03:35:23.369696    8910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:35:23.369726    8910 fix.go:56] duration metric: took 13.729125ms for fixHost
	I0729 03:35:23.369731    8910 start.go:83] releasing machines lock for "kubernetes-upgrade-520000", held for 13.746125ms
	W0729 03:35:23.369735    8910 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:35:23.369772    8910 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:35:23.369776    8910 start.go:729] Will try again in 5 seconds ...
	I0729 03:35:28.371898    8910 start.go:360] acquireMachinesLock for kubernetes-upgrade-520000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:35:28.372422    8910 start.go:364] duration metric: took 415.25µs to acquireMachinesLock for "kubernetes-upgrade-520000"
	I0729 03:35:28.372495    8910 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:35:28.372511    8910 fix.go:54] fixHost starting: 
	I0729 03:35:28.373123    8910 fix.go:112] recreateIfNeeded on kubernetes-upgrade-520000: state=Stopped err=<nil>
	W0729 03:35:28.373145    8910 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:35:28.375716    8910 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-520000" ...
	I0729 03:35:28.383575    8910 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:35:28.383798    8910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:52:a1:35:2c:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubernetes-upgrade-520000/disk.qcow2
	I0729 03:35:28.393365    8910 main.go:141] libmachine: STDOUT: 
	I0729 03:35:28.393422    8910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:35:28.393511    8910 fix.go:56] duration metric: took 20.994292ms for fixHost
	I0729 03:35:28.393543    8910 start.go:83] releasing machines lock for "kubernetes-upgrade-520000", held for 21.10075ms
	W0729 03:35:28.393695    8910 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-520000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-520000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:35:28.401524    8910 out.go:177] 
	W0729 03:35:28.404609    8910 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:35:28.404626    8910 out.go:239] * 
	* 
	W0729 03:35:28.406717    8910 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:35:28.415605    8910 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-520000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-520000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-520000 version --output=json: exit status 1 (63.779292ms)

** stderr ** 
	error: context "kubernetes-upgrade-520000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 03:35:28.494089 -0700 PDT m=+968.895424668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-520000 -n kubernetes-upgrade-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-520000 -n kubernetes-upgrade-520000: exit status 7 (34.219042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-520000
--- FAIL: TestKubernetesUpgrade (18.27s)
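
Both restart attempts above fail at the same step: libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the test exits with status 80. A minimal Go sketch of the same connectivity probe, useful for checking the CI host independently of minikube (the socket path is taken from the logs above; the probe is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path as reported by the failing runs above; adjust if
		// socket_vmnet is installed elsewhere on the host.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver failure above:
			// nothing is accepting on the socket, i.e. the daemon is down.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet accepting connections at", sock)
	}

When the probe fails this way, restarting the socket_vmnet daemon on the host (it runs as a separate root service) is the prerequisite fix; the suggested "minikube delete -p kubernetes-upgrade-520000" alone is unlikely to help, since a freshly created VM still needs the daemon.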

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.4s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19337
- KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2465574385/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.40s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.33s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19337
- KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current204586534/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.33s)
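
Both hyperkit subtests above fail identically, and on this agent that is unavoidable: the hyperkit driver exists only for Intel Macs, and the run is on darwin/arm64, hence DRV_UNSUPPORTED_OS with exit status 56 rather than any upgrade-path problem. A sketch of the kind of host guard that would skip such subtests up front on Apple silicon (illustrative only; not the suite's actual skip logic):

	package main

	import (
		"fmt"
		"runtime"
	)

	// hyperkitSupported reports whether the hyperkit driver can run on this
	// host: hyperkit is an Intel-only macOS hypervisor, so only darwin/amd64
	// qualifies, and the darwin/arm64 agent in this run does not.
	func hyperkitSupported() bool {
		return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
	}

	func main() {
		if !hyperkitSupported() {
			fmt.Printf("skip: hyperkit unsupported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
			return
		}
		fmt.Println("hyperkit driver is a candidate on this host")
	}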

TestStoppedBinaryUpgrade/Upgrade (574.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1917998818 start -p stopped-upgrade-590000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1917998818 start -p stopped-upgrade-590000 --memory=2200 --vm-driver=qemu2 : (40.345433125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1917998818 -p stopped-upgrade-590000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1917998818 -p stopped-upgrade-590000 stop: (12.116631375s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-590000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-590000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.14985s)

-- stdout --
	* [stopped-upgrade-590000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-590000" primary control-plane node in "stopped-upgrade-590000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-590000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 03:36:22.058552    8948 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:36:22.058755    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:36:22.058759    8948 out.go:304] Setting ErrFile to fd 2...
	I0729 03:36:22.058763    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:36:22.058919    8948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:36:22.060199    8948 out.go:298] Setting JSON to false
	I0729 03:36:22.080524    8948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5751,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:36:22.080591    8948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:36:22.085781    8948 out.go:177] * [stopped-upgrade-590000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:36:22.093804    8948 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:36:22.093844    8948 notify.go:220] Checking for updates...
	I0729 03:36:22.110324    8948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:36:22.113773    8948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:36:22.117744    8948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:36:22.120845    8948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:36:22.123747    8948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:36:22.126986    8948 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:36:22.130759    8948 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 03:36:22.133694    8948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:36:22.137745    8948 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:36:22.143711    8948 start.go:297] selected driver: qemu2
	I0729 03:36:22.143716    8948 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:36:22.143769    8948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:36:22.146619    8948 cni.go:84] Creating CNI manager for ""
	I0729 03:36:22.146637    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:36:22.146658    8948 start.go:340] cluster config:
	{Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:36:22.146709    8948 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:36:22.154750    8948 out.go:177] * Starting "stopped-upgrade-590000" primary control-plane node in "stopped-upgrade-590000" cluster
	I0729 03:36:22.157762    8948 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 03:36:22.157779    8948 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 03:36:22.157786    8948 cache.go:56] Caching tarball of preloaded images
	I0729 03:36:22.157841    8948 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:36:22.157846    8948 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 03:36:22.157898    8948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/config.json ...
	I0729 03:36:22.158369    8948 start.go:360] acquireMachinesLock for stopped-upgrade-590000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:36:22.158402    8948 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "stopped-upgrade-590000"
	I0729 03:36:22.158411    8948 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:36:22.158415    8948 fix.go:54] fixHost starting: 
	I0729 03:36:22.158518    8948 fix.go:112] recreateIfNeeded on stopped-upgrade-590000: state=Stopped err=<nil>
	W0729 03:36:22.158526    8948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:36:22.162725    8948 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-590000" ...
	I0729 03:36:22.170750    8948 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:36:22.170807    8948 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51434-:22,hostfwd=tcp::51435-:2376,hostname=stopped-upgrade-590000 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/disk.qcow2
	I0729 03:36:22.219151    8948 main.go:141] libmachine: STDOUT: 
	I0729 03:36:22.219182    8948 main.go:141] libmachine: STDERR: 
	I0729 03:36:22.219189    8948 main.go:141] libmachine: Waiting for VM to start (ssh -p 51434 docker@127.0.0.1)...
	I0729 03:36:42.040781    8948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/config.json ...
	I0729 03:36:42.041132    8948 machine.go:94] provisionDockerMachine start ...
	I0729 03:36:42.041216    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.041457    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.041464    8948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 03:36:42.113969    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 03:36:42.113998    8948 buildroot.go:166] provisioning hostname "stopped-upgrade-590000"
	I0729 03:36:42.114088    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.114273    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.114283    8948 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-590000 && echo "stopped-upgrade-590000" | sudo tee /etc/hostname
	I0729 03:36:42.181715    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-590000
	
	I0729 03:36:42.181770    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.181887    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.181895    8948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-590000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-590000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-590000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 03:36:42.243490    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 03:36:42.243506    8948 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19337-6349/.minikube CaCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19337-6349/.minikube}
	I0729 03:36:42.243515    8948 buildroot.go:174] setting up certificates
	I0729 03:36:42.243520    8948 provision.go:84] configureAuth start
	I0729 03:36:42.243526    8948 provision.go:143] copyHostCerts
	I0729 03:36:42.243611    8948 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem, removing ...
	I0729 03:36:42.243618    8948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem
	I0729 03:36:42.243848    8948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.pem (1082 bytes)
	I0729 03:36:42.244057    8948 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem, removing ...
	I0729 03:36:42.244061    8948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem
	I0729 03:36:42.244127    8948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/cert.pem (1123 bytes)
	I0729 03:36:42.244243    8948 exec_runner.go:144] found /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem, removing ...
	I0729 03:36:42.244246    8948 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem
	I0729 03:36:42.244297    8948 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19337-6349/.minikube/key.pem (1679 bytes)
	I0729 03:36:42.244379    8948 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-590000 san=[127.0.0.1 localhost minikube stopped-upgrade-590000]
	I0729 03:36:42.395932    8948 provision.go:177] copyRemoteCerts
	I0729 03:36:42.395978    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 03:36:42.395987    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:36:42.431028    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 03:36:42.437578    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 03:36:42.444119    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 03:36:42.451416    8948 provision.go:87] duration metric: took 207.895459ms to configureAuth
	I0729 03:36:42.451426    8948 buildroot.go:189] setting minikube options for container-runtime
	I0729 03:36:42.451544    8948 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:36:42.451582    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.451673    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.451678    8948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 03:36:42.508671    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 03:36:42.508681    8948 buildroot.go:70] root file system type: tmpfs
	I0729 03:36:42.508735    8948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 03:36:42.508798    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.508919    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.508956    8948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 03:36:42.573358    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 03:36:42.573416    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.573542    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.573554    8948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 03:36:42.933619    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 03:36:42.933634    8948 machine.go:97] duration metric: took 892.512917ms to provisionDockerMachine
	I0729 03:36:42.933640    8948 start.go:293] postStartSetup for "stopped-upgrade-590000" (driver="qemu2")
	I0729 03:36:42.933646    8948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 03:36:42.933701    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 03:36:42.933709    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:36:42.964116    8948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 03:36:42.965875    8948 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 03:36:42.965884    8948 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19337-6349/.minikube/addons for local assets ...
	I0729 03:36:42.965968    8948 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19337-6349/.minikube/files for local assets ...
	I0729 03:36:42.966115    8948 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem -> 68432.pem in /etc/ssl/certs
	I0729 03:36:42.966242    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 03:36:42.969062    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem --> /etc/ssl/certs/68432.pem (1708 bytes)
	I0729 03:36:42.976230    8948 start.go:296] duration metric: took 42.583542ms for postStartSetup
	I0729 03:36:42.976250    8948 fix.go:56] duration metric: took 20.818237709s for fixHost
	I0729 03:36:42.976302    8948 main.go:141] libmachine: Using SSH client type: native
	I0729 03:36:42.976427    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1027caa10] 0x1027cd270 <nil>  [] 0s} localhost 51434 <nil> <nil>}
	I0729 03:36:42.976433    8948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 03:36:43.037656    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249403.057969212
	
	I0729 03:36:43.037667    8948 fix.go:216] guest clock: 1722249403.057969212
	I0729 03:36:43.037671    8948 fix.go:229] Guest: 2024-07-29 03:36:43.057969212 -0700 PDT Remote: 2024-07-29 03:36:42.976252 -0700 PDT m=+20.952243084 (delta=81.717212ms)
	I0729 03:36:43.037688    8948 fix.go:200] guest clock delta is within tolerance: 81.717212ms
	I0729 03:36:43.037691    8948 start.go:83] releasing machines lock for "stopped-upgrade-590000", held for 20.879690375s
	I0729 03:36:43.037759    8948 ssh_runner.go:195] Run: cat /version.json
	I0729 03:36:43.037769    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:36:43.037802    8948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 03:36:43.037841    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	W0729 03:36:43.038471    8948 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51434: connect: connection refused
	I0729 03:36:43.038495    8948 retry.go:31] will retry after 312.978852ms: dial tcp [::1]:51434: connect: connection refused
	W0729 03:36:43.068083    8948 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 03:36:43.068163    8948 ssh_runner.go:195] Run: systemctl --version
	I0729 03:36:43.070279    8948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 03:36:43.072075    8948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 03:36:43.072113    8948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 03:36:43.075472    8948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 03:36:43.080900    8948 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 03:36:43.080913    8948 start.go:495] detecting cgroup driver to use...
	I0729 03:36:43.081004    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:36:43.090708    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 03:36:43.094045    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 03:36:43.097147    8948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 03:36:43.097191    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 03:36:43.100421    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:36:43.103915    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 03:36:43.107468    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 03:36:43.111000    8948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 03:36:43.114425    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 03:36:43.117596    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 03:36:43.120633    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 03:36:43.123821    8948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 03:36:43.127108    8948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 03:36:43.129932    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:43.217774    8948 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 03:36:43.224017    8948 start.go:495] detecting cgroup driver to use...
	I0729 03:36:43.224102    8948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 03:36:43.230784    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:36:43.236316    8948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 03:36:43.245313    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 03:36:43.250195    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 03:36:43.254995    8948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 03:36:43.312016    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 03:36:43.317530    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 03:36:43.323887    8948 ssh_runner.go:195] Run: which cri-dockerd
	I0729 03:36:43.325566    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 03:36:43.328633    8948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 03:36:43.334020    8948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 03:36:43.412838    8948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 03:36:43.499322    8948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 03:36:43.499390    8948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 03:36:43.506173    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:43.583853    8948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:36:44.746553    8948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162706833s)
	I0729 03:36:44.746612    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 03:36:44.752824    8948 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 03:36:44.759906    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:36:44.764475    8948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 03:36:44.845113    8948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 03:36:44.931817    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:45.000491    8948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 03:36:45.006437    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 03:36:45.010781    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:45.081592    8948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 03:36:45.123263    8948 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 03:36:45.123341    8948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 03:36:45.125664    8948 start.go:563] Will wait 60s for crictl version
	I0729 03:36:45.125699    8948 ssh_runner.go:195] Run: which crictl
	I0729 03:36:45.126995    8948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 03:36:45.141295    8948 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 03:36:45.141382    8948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:36:45.157246    8948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 03:36:45.176981    8948 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 03:36:45.177045    8948 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 03:36:45.178256    8948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 03:36:45.181666    8948 kubeadm.go:883] updating cluster {Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 03:36:45.181715    8948 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 03:36:45.181756    8948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:36:45.192184    8948 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 03:36:45.192193    8948 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 03:36:45.192239    8948 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 03:36:45.195704    8948 ssh_runner.go:195] Run: which lz4
	I0729 03:36:45.197035    8948 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 03:36:45.198313    8948 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 03:36:45.198322    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
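The stat-then-scp exchange above is a transfer guard: the tarball is copied only when the remote stat exits non-zero (status 1, "No such file or directory"). A sketch of that guard; the host string is a hypothetical user@address for the VM, and the real transfer runs over minikube's own SSH session rather than the ssh/scp CLIs:

// copyIfAbsent stats the remote file over ssh and only copies the
// local file across when the stat fails, as in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func copyIfAbsent(host, local, remote string) error {
	if err := exec.Command("ssh", host, "stat", remote).Run(); err == nil {
		return nil // already there, skip the transfer
	}
	return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
	err := copyIfAbsent("docker@10.0.2.15",
		"/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
		"/preloaded.tar.lz4")
	fmt.Println(err)
}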
	I0729 03:36:46.140437    8948 docker.go:649] duration metric: took 943.445875ms to copy over tarball
	I0729 03:36:46.140508    8948 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 03:36:47.346137    8948 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.205638s)
	I0729 03:36:47.346151    8948 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 03:36:47.361540    8948 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 03:36:47.364520    8948 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 03:36:47.369926    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:47.444539    8948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 03:36:49.089792    8948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.645269708s)
	I0729 03:36:49.089899    8948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 03:36:49.101194    8948 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 03:36:49.101205    8948 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 03:36:49.101210    8948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 03:36:49.105268    8948 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.107092    8948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.108951    8948 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.109497    8948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.111811    8948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.111809    8948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.113636    8948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.113672    8948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.115090    8948 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.115205    8948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.124193    8948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.124211    8948 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 03:36:49.125907    8948 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.125943    8948 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.126954    8948 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 03:36:49.127841    8948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
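The "daemon lookup ... No such image" warnings are expected here: each image is first looked up in the host's local Docker daemon and, when absent, retrieved from the registry or the on-disk cache instead. A sketch of that fall-through using go-containerregistry (which minikube's image helpers build on); cache handling and options are omitted:

// retrieve tries the local Docker daemon first and falls back to the
// remote registry, matching the lookup chain implied by the warnings.
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func retrieve(image string) (v1.Image, error) {
	ref, err := name.ParseReference(image)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // already in the local Docker daemon
	}
	return remote.Image(ref) // fall back to the registry
}

func main() {
	img, err := retrieve("registry.k8s.io/pause:3.7")
	fmt.Println(img != nil, err)
}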
	I0729 03:36:49.528827    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.529449    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.537710    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.537860    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.540887    8948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 03:36:49.540916    8948 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.540963    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 03:36:49.544521    8948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 03:36:49.544541    8948 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.544577    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 03:36:49.562666    8948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 03:36:49.562685    8948 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.562721    8948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 03:36:49.562731    8948 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 03:36:49.562742    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 03:36:49.562760    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
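"Needs transfer" above means the docker image inspect step found no image whose content ID matches the hash recorded for the cached copy, so the stale image is removed (docker rmi) and reloaded from cache. The comparison, sketched against a local docker CLI; the real check runs over ssh inside the VM:

// needsTransfer reports whether the image is missing from the runtime
// or present with a different content ID than the cached copy.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
}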
	I0729 03:36:49.565449    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 03:36:49.565950    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0729 03:36:49.568102    8948 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 03:36:49.568220    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.572369    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 03:36:49.587450    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 03:36:49.587464    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 03:36:49.587533    8948 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 03:36:49.587546    8948 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 03:36:49.587587    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 03:36:49.592380    8948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 03:36:49.592410    8948 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.592464    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 03:36:49.593871    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.601572    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 03:36:49.601699    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 03:36:49.610395    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 03:36:49.610419    8948 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 03:36:49.610436    8948 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.610477    8948 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 03:36:49.610511    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 03:36:49.612630    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 03:36:49.612646    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 03:36:49.621251    8948 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 03:36:49.621263    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 03:36:49.629938    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 03:36:49.629964    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 03:36:49.629967    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 03:36:49.630063    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 03:36:49.659589    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0729 03:36:49.659598    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 03:36:49.659628    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0729 03:36:49.717000    8948 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 03:36:49.717013    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 03:36:49.750310    8948 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 03:36:49.750416    8948 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.799177    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 03:36:49.799196    8948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 03:36:49.799222    8948 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.799287    8948 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:36:49.829743    8948 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 03:36:49.829868    8948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 03:36:49.843092    8948 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 03:36:49.843122    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 03:36:49.902265    8948 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 03:36:49.902279    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 03:36:50.254319    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 03:36:50.254343    8948 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 03:36:50.254352    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0729 03:36:50.407046    8948 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 03:36:50.407095    8948 cache_images.go:92] duration metric: took 1.305902292s to LoadCachedImages
	W0729 03:36:50.407156    8948 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 03:36:50.407165    8948 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 03:36:50.407221    8948 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-590000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
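The kubelet drop-in above is rendered from a template with the binary path, node name, and node IP filled in (the empty ExecStart= line is systemd's idiom for resetting the unit's previous ExecStart). A sketch of that rendering with text/template; the field names here are illustrative, not minikube's actual template:

// Render a kubelet systemd drop-in like the one dumped above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.24.1/kubelet",
		"NodeName":    "stopped-upgrade-590000",
		"NodeIP":      "10.0.2.15",
	})
}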
	I0729 03:36:50.407310    8948 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 03:36:50.421363    8948 cni.go:84] Creating CNI manager for ""
	I0729 03:36:50.421375    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:36:50.421380    8948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 03:36:50.421388    8948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-590000 NodeName:stopped-upgrade-590000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 03:36:50.421451    8948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-590000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 03:36:50.421515    8948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 03:36:50.424526    8948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 03:36:50.424568    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 03:36:50.427706    8948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 03:36:50.432777    8948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 03:36:50.437710    8948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 03:36:50.442714    8948 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 03:36:50.443979    8948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 03:36:50.447556    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:36:50.521752    8948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:36:50.530809    8948 certs.go:68] Setting up /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000 for IP: 10.0.2.15
	I0729 03:36:50.530817    8948 certs.go:194] generating shared ca certs ...
	I0729 03:36:50.530827    8948 certs.go:226] acquiring lock for ca certs: {Name:mk5485201dd0b8c49ea299ac713a7956ec13f382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.531004    8948 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.key
	I0729 03:36:50.531054    8948 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.key
	I0729 03:36:50.531059    8948 certs.go:256] generating profile certs ...
	I0729 03:36:50.531130    8948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key
	I0729 03:36:50.531149    8948 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb
	I0729 03:36:50.531159    8948 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 03:36:50.652465    8948 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb ...
	I0729 03:36:50.652480    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb: {Name:mkba9908a3833f05a0fd05760f672abad4b9cc55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.652758    8948 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb ...
	I0729 03:36:50.652763    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb: {Name:mk9759d71abedb9e6737f26ae1e02520ea933ac2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.652903    8948 certs.go:381] copying /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt.84b808fb -> /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt
	I0729 03:36:50.653036    8948 certs.go:385] copying /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key.84b808fb -> /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key
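The apiserver serving cert generated above carries the four IPs from the log as SANs (the service VIPs, loopback, and the node address). A self-signed sketch of such a cert with crypto/x509; the real cert is signed by minikubeCA rather than by itself:

// Generate a serving cert whose SANs are the IPs listed in the
// "Generating cert ... with IP's" line above. Self-signed for brevity.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}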
	I0729 03:36:50.653195    8948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/proxy-client.key
	I0729 03:36:50.653323    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843.pem (1338 bytes)
	W0729 03:36:50.653353    8948 certs.go:480] ignoring /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843_empty.pem, impossibly tiny 0 bytes
	I0729 03:36:50.653358    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 03:36:50.653384    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem (1082 bytes)
	I0729 03:36:50.653411    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem (1123 bytes)
	I0729 03:36:50.653439    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/key.pem (1679 bytes)
	I0729 03:36:50.653490    8948 certs.go:484] found cert: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem (1708 bytes)
	I0729 03:36:50.653816    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 03:36:50.660869    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 03:36:50.667915    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 03:36:50.675484    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 03:36:50.683045    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 03:36:50.690222    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 03:36:50.697140    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 03:36:50.703974    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 03:36:50.711427    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/6843.pem --> /usr/share/ca-certificates/6843.pem (1338 bytes)
	I0729 03:36:50.718027    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/ssl/certs/68432.pem --> /usr/share/ca-certificates/68432.pem (1708 bytes)
	I0729 03:36:50.724592    8948 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 03:36:50.731638    8948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 03:36:50.736735    8948 ssh_runner.go:195] Run: openssl version
	I0729 03:36:50.738599    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6843.pem && ln -fs /usr/share/ca-certificates/6843.pem /etc/ssl/certs/6843.pem"
	I0729 03:36:50.741509    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6843.pem
	I0729 03:36:50.742831    8948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:20 /usr/share/ca-certificates/6843.pem
	I0729 03:36:50.742851    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6843.pem
	I0729 03:36:50.744639    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6843.pem /etc/ssl/certs/51391683.0"
	I0729 03:36:50.747858    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/68432.pem && ln -fs /usr/share/ca-certificates/68432.pem /etc/ssl/certs/68432.pem"
	I0729 03:36:50.751200    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/68432.pem
	I0729 03:36:50.752702    8948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:20 /usr/share/ca-certificates/68432.pem
	I0729 03:36:50.752723    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/68432.pem
	I0729 03:36:50.754496    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/68432.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 03:36:50.757315    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 03:36:50.760260    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:36:50.761754    8948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:36:50.761769    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 03:36:50.763483    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
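Each block above follows OpenSSL's CA-directory convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and symlink <hash>.0 in /etc/ssl/certs at it, which is why 6843.pem becomes 51391683.0 and minikubeCA.pem becomes b5213941.0. The same steps sketched in Go, shelling out to openssl for the hash:

// installCert symlinks <subject-hash>.0 in /etc/ssl/certs at the PEM,
// the rehash pattern performed by the ln -fs commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem"))
}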
	I0729 03:36:50.766503    8948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 03:36:50.767956    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 03:36:50.770550    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 03:36:50.772512    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 03:36:50.774750    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 03:36:50.776660    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 03:36:50.778487    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
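openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 hours); each control-plane cert is screened this way before being reused. A pure-Go equivalent with crypto/x509:

// expiresWithin reports whether the PEM cert at path expires within d,
// matching the -checkend 86400 checks above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}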
	I0729 03:36:50.780334    8948 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-590000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51469 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-590000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 03:36:50.780418    8948 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:36:50.790766    8948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 03:36:50.793798    8948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 03:36:50.793803    8948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 03:36:50.793826    8948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 03:36:50.796631    8948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:36:50.796926    8948 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-590000" does not appear in /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:36:50.797038    8948 kubeconfig.go:62] /Users/jenkins/minikube-integration/19337-6349/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-590000" cluster setting kubeconfig missing "stopped-upgrade-590000" context setting]
	I0729 03:36:50.797229    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:36:50.797621    8948 kapi.go:59] client config for stopped-upgrade-590000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b60080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
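The rest.Config dump above is the client-go configuration minikube builds to talk to the apiserver: the host URL plus the profile's client cert/key and the cluster CA. A sketch of constructing a clientset from those same fields:

// Build a Kubernetes clientset from the cert/key/CA paths shown in
// the rest.Config dump above (client-go).
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	fmt.Println(clientset != nil, err)
}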
	I0729 03:36:50.797914    8948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 03:36:50.800589    8948 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-590000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 03:36:50.800597    8948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 03:36:50.800631    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 03:36:50.811291    8948 docker.go:483] Stopping containers: [5ec83535d1f0 0c6f4763c087 6c9e82fc6ad9 2ed58f54ac75 15a008cb819a 5ca831426e6a a5ca2a3a4957 5b0322cd745f]
	I0729 03:36:50.811360    8948 ssh_runner.go:195] Run: docker stop 5ec83535d1f0 0c6f4763c087 6c9e82fc6ad9 2ed58f54ac75 15a008cb819a 5ca831426e6a a5ca2a3a4957 5b0322cd745f
	I0729 03:36:50.821767    8948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 03:36:50.827282    8948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:36:50.829980    8948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 03:36:50.829985    8948 kubeadm.go:157] found existing configuration files:
	
	I0729 03:36:50.830009    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf
	I0729 03:36:50.832397    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 03:36:50.832417    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:36:50.835310    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf
	I0729 03:36:50.838117    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 03:36:50.838135    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:36:50.840570    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf
	I0729 03:36:50.843389    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 03:36:50.843414    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:36:50.846018    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf
	I0729 03:36:50.848398    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 03:36:50.848419    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 03:36:50.851351    8948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:36:50.854141    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:50.876439    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.584068    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.714622    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 03:36:51.738480    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
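Because existing configuration files were found, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full kubeadm init. The sequence as a loop, with the same --config flag as in the log; PATH handling and error recovery are omitted:

// Replay the kubeadm init phases shown above, in order.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s", phase, err, out)
	}
}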
	I0729 03:36:51.760997    8948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:36:51.761076    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:36:52.262424    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:36:52.762926    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:36:52.767257    8948 api_server.go:72] duration metric: took 1.006281125s to wait for apiserver process to appear ...
	I0729 03:36:52.767268    8948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:36:52.767276    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:36:57.769286    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:36:57.769342    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:02.769651    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:02.769706    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:07.770081    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:07.770118    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:12.770714    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:12.770810    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:17.771813    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:17.771838    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:22.772730    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:22.772752    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:27.773955    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:27.774025    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:32.775928    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:32.775951    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:37.777892    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:37.777914    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:42.779255    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:42.779301    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:47.781466    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:37:47.781490    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:37:52.783605    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
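Each healthz probe above gives up after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), and the loop keeps retrying until the overall start timeout, interleaving log collection between rounds. A sketch of that probe loop; InsecureSkipVerify here stands in for the CA verification the real client performs:

// waitForHealthz polls GET /healthz with a short per-probe timeout
// until it returns 200 or the overall deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the per-probe timeout above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}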
	I0729 03:37:52.783769    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:37:52.798720    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:37:52.798804    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:37:52.811337    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:37:52.811412    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:37:52.826343    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:37:52.826418    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:37:52.837106    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:37:52.837179    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:37:52.849089    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:37:52.849162    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:37:52.859254    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:37:52.859332    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:37:52.869267    8948 logs.go:276] 0 containers: []
	W0729 03:37:52.869280    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:37:52.869349    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:37:52.879396    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:37:52.879415    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:37:52.879436    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:37:52.916563    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:37:52.916572    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:37:52.959370    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:37:52.959380    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:37:52.974239    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:37:52.974253    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:37:52.985510    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:37:52.985521    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:37:52.997433    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:37:52.997446    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:37:53.012678    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:37:53.012692    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:37:53.024266    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:37:53.024275    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:37:53.040575    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:37:53.040589    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:37:53.066291    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:37:53.066298    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:37:53.070756    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:37:53.070762    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:37:53.170408    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:37:53.170419    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:37:53.184288    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:37:53.184302    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:37:53.200964    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:37:53.200975    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:37:53.214875    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:37:53.214887    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:37:53.231147    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:37:53.231162    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:37:53.242593    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:37:53.242609    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
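The gathering pass above is mechanical: for each control-plane component, list matching container IDs (running or exited) with a k8s_<name> filter, then tail the last 400 lines of each container's logs. Sketched with os/exec against a local docker CLI; the real commands run over ssh in the VM:

// List containers per component and tail their logs, as the
// docker ps / docker logs pairs above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		for _, id := range containerIDs(c) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s]: %d bytes of logs\n", c, id, len(logs))
		}
	}
}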
	I0729 03:37:55.754161    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:00.756398    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:00.756564    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:00.770415    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:00.770503    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:00.782265    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:00.782336    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:00.796900    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:00.796975    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:00.811910    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:00.811987    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:00.822546    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:00.822615    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:00.833181    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:00.833247    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:00.843253    8948 logs.go:276] 0 containers: []
	W0729 03:38:00.843266    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:00.843324    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:00.860083    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:00.860103    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:00.860108    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:00.871648    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:00.871659    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:00.883596    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:00.883606    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:00.896484    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:00.896497    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:00.907321    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:00.907335    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:00.930597    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:00.930604    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:00.942112    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:00.942129    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:00.956616    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:00.956629    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:00.970214    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:00.970224    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:01.009154    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:01.009164    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:01.024424    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:01.024433    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:01.035570    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:01.035580    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:01.053786    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:01.053796    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:01.093829    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:01.093839    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:01.098651    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:01.098657    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:01.114080    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:01.114090    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:01.125621    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:01.125630    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
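The block above is one complete iteration of the apiserver wait loop: a GET against https://10.0.2.15:8443/healthz cut off by a ~5-second client timeout ("context deadline exceeded ... while awaiting headers"), followed by a diagnostic sweep over every control-plane container. Below is a minimal Go sketch of such a probe — an illustration of the pattern, not minikube's actual api_server.go; the 5s timeout, skip-verify TLS, and ~2.5s pause are assumptions read off the log timestamps.

```go
// healthz_probe.go — minimal sketch of the probe pattern above, not
// minikube's actual api_server.go. The URL, the 5s timeout, and the
// skip-verify TLS config are assumptions read off the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(client *http.Client, url string) error {
	resp, err := client.Get(url)
	if err != nil {
		// On timeout err reads: Get "<url>": context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // the ~5s gap between "Checking" and "stopped" above
		Transport: &http.Transport{
			// the in-VM apiserver cert is not in the host trust store
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 3; i++ {
		fmt.Println("Checking apiserver healthz ...")
		if err := checkHealthz(client, "https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2500 * time.Millisecond) // pause seen between sweep end and next probe
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```

Run against an unreachable endpoint, client.Get returns the same Client.Timeout error string that net/http produces in the "stopped:" lines above.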
	I0729 03:38:03.664761    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:08.667087    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:08.667205    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:08.681875    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:08.681951    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:08.694131    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:08.694205    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:08.704765    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:08.704832    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:08.715356    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:08.715419    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:08.727886    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:08.727955    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:08.738646    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:08.738707    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:08.748884    8948 logs.go:276] 0 containers: []
	W0729 03:38:08.748896    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:08.748946    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:08.759620    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:08.759638    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:08.759643    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:08.795993    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:08.796001    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:08.799904    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:08.799911    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:08.813354    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:08.813366    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:08.825922    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:08.825931    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:08.863178    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:08.863190    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:08.878048    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:08.878062    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:08.893382    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:08.893392    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:08.904893    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:08.904907    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:08.928914    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:08.928920    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:08.943217    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:08.943229    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:08.980777    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:08.980791    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:08.991464    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:08.991476    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:09.009019    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:09.009032    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:09.020054    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:09.020067    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:09.032471    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:09.032484    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:09.044489    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:09.044503    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
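Each sweep begins by resolving container IDs per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, as logged above; two IDs for apiserver, etcd, scheduler, and controller-manager indicate an exited first attempt plus its restarted successor. A sketch of that discovery step, with a hypothetical containerIDs helper run locally rather than through minikube's SSH runner:

```go
// container_ids.go — sketch of the per-component container discovery
// above. containerIDs is a hypothetical helper; minikube issues the
// same docker command through its SSH runner instead of locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers, running or exited, whose
// name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// an empty result reproduces the `No container was found
		// matching "kindnet"` warning above
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```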
	I0729 03:38:11.558977    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:16.561238    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:16.561348    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:16.574273    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:16.574335    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:16.584934    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:16.585006    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:16.595077    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:16.595146    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:16.605362    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:16.605439    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:16.616761    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:16.616830    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:16.627747    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:16.627815    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:16.642133    8948 logs.go:276] 0 containers: []
	W0729 03:38:16.642148    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:16.642204    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:16.652568    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:16.652584    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:16.652589    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:16.676111    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:16.676119    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:16.712343    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:16.712350    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:16.747634    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:16.747644    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:16.760461    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:16.760474    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:16.775058    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:16.775068    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:16.786905    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:16.786916    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:16.825847    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:16.825861    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:16.838969    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:16.838979    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:16.853327    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:16.853342    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:16.867751    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:16.867767    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:16.880368    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:16.880382    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:16.897798    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:16.897808    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:16.908953    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:16.908964    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:16.913764    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:16.913771    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:16.927787    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:16.927806    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:16.939548    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:16.939562    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
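For every ID found, the sweep then tails the last 400 lines of that container's output with `docker logs --tail 400 <id>`, one "Gathering logs for ..." entry per container. A sketch of that fan-out (tailContainer is a hypothetical helper standing in for the ssh_runner call in the log):

```go
// tail_logs.go — sketch of the log fan-out above: for each container
// ID, capture the last 400 lines via docker logs. tailContainer is a
// hypothetical helper standing in for minikube's ssh_runner call.
package main

import (
	"fmt"
	"os/exec"
)

func tailContainer(id string, lines int) (string, error) {
	// docker logs writes to both stdout and stderr, so capture both
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	// IDs copied from the sweep above
	for _, id := range []string{"d5cd4a30fc18", "c053f31036d8", "6be12b02b510"} {
		fmt.Printf("Gathering logs for %s ...\n", id)
		out, err := tailContainer(id, 400)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Print(out)
	}
}
```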
	I0729 03:38:19.453386    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:24.454474    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:24.454676    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:24.473680    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:24.473772    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:24.490504    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:24.490588    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:24.504283    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:24.504355    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:24.514483    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:24.514566    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:24.525305    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:24.525387    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:24.536061    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:24.536131    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:24.546634    8948 logs.go:276] 0 containers: []
	W0729 03:38:24.546645    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:24.546696    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:24.557716    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:24.557734    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:24.557740    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:24.596477    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:24.596496    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:24.608681    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:24.608696    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:24.635940    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:24.635949    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:24.647772    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:24.647783    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:24.685979    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:24.685988    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:24.719719    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:24.719730    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:24.733358    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:24.733371    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:24.744763    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:24.744774    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:24.749375    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:24.749383    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:24.763140    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:24.763149    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:24.774815    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:24.774826    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:24.792541    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:24.792555    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:24.803660    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:24.803669    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:24.817528    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:24.817539    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:24.833261    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:24.833270    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:24.845162    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:24.845172    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:27.361263    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:32.362569    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:32.362784    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:32.382778    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:32.382865    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:32.397090    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:32.397172    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:32.410043    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:32.410113    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:32.423496    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:32.423570    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:32.435915    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:32.435986    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:32.446494    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:32.446561    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:32.461652    8948 logs.go:276] 0 containers: []
	W0729 03:38:32.461670    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:32.461730    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:32.471928    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:32.471949    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:32.471954    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:32.489047    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:32.489057    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:32.504989    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:32.505002    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:32.529180    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:32.529190    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:32.568132    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:32.568156    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:32.572611    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:32.572619    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:32.586562    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:32.586575    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:32.597858    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:32.597871    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:32.633664    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:32.633679    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:32.649661    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:32.649673    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:32.661743    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:32.661757    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:32.676526    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:32.676536    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:32.714330    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:32.714340    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:32.728454    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:32.728464    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:32.739278    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:32.739292    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:32.751084    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:32.751094    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:32.762680    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:32.762690    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
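Host-level logs come from systemd rather than docker: `journalctl -u kubelet -n 400` for the kubelet and `journalctl -u docker -u cri-docker -n 400` for the container runtime, exactly as the "Gathering logs for kubelet/Docker" lines show. A sketch of that step, assuming local sudo access instead of the SSH runner used in the log:

```go
// host_logs.go — sketch of the systemd log collection above, assuming
// local sudo access; minikube runs the identical journalctl commands
// inside the guest over SSH.
package main

import (
	"fmt"
	"os/exec"
)

// unitLogs returns the last n journal lines for the given systemd units.
func unitLogs(n int, units ...string) (string, error) {
	args := []string{"journalctl", "-n", fmt.Sprint(n)}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// same unit sets and 400-line budget as the sweep above
	for _, units := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
		out, err := unitLogs(400, units...)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Print(out)
	}
}
```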
	I0729 03:38:35.282307    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:40.282799    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:40.283038    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:40.309243    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:40.309364    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:40.326341    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:40.326424    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:40.339848    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:40.339919    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:40.352511    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:40.352585    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:40.362731    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:40.362799    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:40.372730    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:40.372804    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:40.387958    8948 logs.go:276] 0 containers: []
	W0729 03:38:40.387971    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:40.388027    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:40.403348    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:40.403368    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:40.403374    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:40.414346    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:40.414357    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:40.429765    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:40.429779    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:40.447189    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:40.447202    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:40.451439    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:40.451446    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:40.498948    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:40.498961    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:40.514570    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:40.514583    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:40.553173    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:40.553184    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:40.567742    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:40.567754    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:40.579097    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:40.579109    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:40.590977    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:40.590989    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:40.605235    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:40.605249    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:40.617339    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:40.617351    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:40.640831    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:40.640838    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:40.653564    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:40.653577    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:40.665070    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:40.665080    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:40.704001    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:40.704034    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:43.218587    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:48.221003    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:48.221296    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:48.251954    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:48.252084    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:48.271648    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:48.271763    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:48.286236    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:48.286308    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:48.298094    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:48.298170    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:48.308610    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:48.308697    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:48.323662    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:48.323724    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:48.334558    8948 logs.go:276] 0 containers: []
	W0729 03:38:48.334567    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:48.334623    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:48.345549    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:48.345567    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:48.345573    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:48.380130    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:48.380145    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:48.397974    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:48.397988    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:48.422060    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:48.422070    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:48.426332    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:48.426339    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:48.464751    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:48.464764    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:48.479949    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:48.479961    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:48.491343    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:48.491353    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:48.505263    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:48.505278    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:48.518416    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:48.518432    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:48.535375    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:48.535389    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:48.550464    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:48.550474    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:48.567938    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:48.567953    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:48.583606    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:48.583621    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:48.595660    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:48.595671    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:48.607015    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:48.607025    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:48.644911    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:48.644919    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
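The "container status" step uses a fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The inner `which crictl || echo crictl` keeps the command word non-empty even when crictl is absent, so the failed invocation lets the outer || fall through to plain `docker ps -a`. A Go sketch of the same preference order (a local stand-in, not minikube's actual shell invocation):

```go
// container_status.go — sketch of the crictl-or-docker preference in
// the "container status" step above; a local stand-in, not minikube's
// actual shell invocation.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	// prefer crictl when it is on PATH, mirroring `which crictl || ...`
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	// fall back to docker, mirroring the outer `|| sudo docker ps -a`
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(out)
}
```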
	I0729 03:38:51.159682    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:38:56.162140    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:38:56.162499    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:38:56.194304    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:38:56.194437    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:38:56.212612    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:38:56.212713    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:38:56.226138    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:38:56.226217    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:38:56.239077    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:38:56.239152    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:38:56.249731    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:38:56.249806    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:38:56.260876    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:38:56.260941    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:38:56.271491    8948 logs.go:276] 0 containers: []
	W0729 03:38:56.271504    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:38:56.271565    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:38:56.282467    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:38:56.282487    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:38:56.282493    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:38:56.286773    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:38:56.286782    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:38:56.305616    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:38:56.305630    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:38:56.344922    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:38:56.344950    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:38:56.359506    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:38:56.359516    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:38:56.370782    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:38:56.370793    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:38:56.393527    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:38:56.393533    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:38:56.410413    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:38:56.410426    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:38:56.430948    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:38:56.430958    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:38:56.445206    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:38:56.445218    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:38:56.457796    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:38:56.457810    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:38:56.469544    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:38:56.469554    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:38:56.481607    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:38:56.481619    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:38:56.493837    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:38:56.493848    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:38:56.531303    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:38:56.531314    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:38:56.570890    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:38:56.570903    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:38:56.585359    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:38:56.585370    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:38:59.098856    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:04.101075    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:04.101284    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:04.119980    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:04.120068    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:04.133873    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:04.133952    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:04.145534    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:04.145596    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:04.157221    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:04.157297    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:04.167235    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:04.167306    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:04.177464    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:04.177535    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:04.187834    8948 logs.go:276] 0 containers: []
	W0729 03:39:04.187847    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:04.187902    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:04.198427    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:04.198447    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:04.198452    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:04.209729    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:04.209739    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:04.222241    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:04.222252    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:04.239493    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:04.239503    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:04.250752    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:04.250762    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:04.287612    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:04.287630    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:04.292393    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:04.292403    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:04.329259    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:04.329275    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:04.343435    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:04.343449    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:04.358210    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:04.358225    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:04.371358    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:04.371367    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:04.395577    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:04.395584    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:04.430796    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:04.430807    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:04.443139    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:04.443153    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:04.462279    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:04.462294    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:04.476422    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:04.476435    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:04.487819    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:04.487832    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:07.004245    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:12.006555    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:12.006703    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:12.018711    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:12.018792    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:12.034047    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:12.034116    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:12.044709    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:12.044796    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:12.055785    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:12.055856    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:12.066243    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:12.066311    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:12.077247    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:12.077317    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:12.087650    8948 logs.go:276] 0 containers: []
	W0729 03:39:12.087660    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:12.087721    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:12.098384    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:12.098405    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:12.098410    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:12.121504    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:12.121515    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:12.126028    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:12.126036    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:12.162908    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:12.162918    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:12.173882    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:12.173893    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:12.185693    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:12.185704    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:12.204090    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:12.204100    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:12.217050    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:12.217060    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:12.253336    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:12.253344    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:12.274965    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:12.274979    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:12.288661    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:12.288674    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:12.303394    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:12.303404    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:12.315852    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:12.315866    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:12.351972    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:12.351985    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:12.367216    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:12.367227    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:12.379050    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:12.379062    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:12.390255    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:12.390266    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
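The cadence of the loop can be read straight off the timestamps: successive "Checking apiserver healthz" probes start roughly 7.9s apart (03:38:03.66, 03:38:11.56, 03:38:19.45, ...), which decomposes into the 5s probe timeout, well under a second to a second and a half of log gathering, and a pause of about 2.5s before the next attempt. A quick check of that arithmetic, using timestamps copied verbatim from the log:

```go
// cadence.go — quick arithmetic check on the probe cadence, using
// timestamps copied verbatim from the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	checks := []string{"03:38:03.664761", "03:38:11.558977", "03:38:19.453386"}
	const layout = "15:04:05.000000"
	prev, err := time.Parse(layout, checks[0])
	if err != nil {
		panic(err)
	}
	for _, s := range checks[1:] {
		t, _ := time.Parse(layout, s)
		// each gap is ~7.9s: 5s probe timeout + the log sweep + ~2.5s pause
		fmt.Printf("%s -> %s: %v\n", prev.Format(layout), t.Format(layout), t.Sub(prev))
		prev = t
	}
}
```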
	I0729 03:39:14.904576    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:19.905573    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:19.905785    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:19.923356    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:19.923444    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:19.940971    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:19.941049    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:19.952511    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:19.952581    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:19.962753    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:19.962826    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:19.973209    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:19.973278    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:19.985118    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:19.985208    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:19.995697    8948 logs.go:276] 0 containers: []
	W0729 03:39:19.995710    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:19.995765    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:20.006412    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:20.006427    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:20.006434    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:20.044096    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:20.044111    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:20.061773    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:20.061783    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:20.076110    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:20.076123    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:20.091313    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:20.091326    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:20.103324    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:20.103338    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:20.141370    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:20.141378    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:20.145813    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:20.145819    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:20.157347    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:20.157361    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:20.169443    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:20.169457    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:20.181022    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:20.181031    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:20.194434    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:20.194450    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:20.207360    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:20.207371    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:20.230465    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:20.230475    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:20.248176    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:20.248191    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:20.259418    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:20.259430    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:20.298348    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:20.298362    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:22.817252    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:27.819419    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:27.819636    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:27.836122    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:27.836220    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:27.848783    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:27.848847    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:27.860228    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:27.860290    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:27.870531    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:27.870601    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:27.880932    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:27.880999    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:27.891511    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:27.891577    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:27.901403    8948 logs.go:276] 0 containers: []
	W0729 03:39:27.901416    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:27.901471    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:27.912155    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
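The logs.go:276 lines enumerate control-plane containers by name prefix: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component, with a warning (logs.go:278) when nothing matches, as with "kindnet" here. A sketch of that discovery step under two assumptions: the docker CLI is invoked locally via os/exec rather than through ssh_runner, and output parsing is simple whitespace splitting:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same docker ps invocation the log shows and
// returns one ID per matching container (running or exited).
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; Fields drops blanks
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W: No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```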
	I0729 03:39:27.912173    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:27.912178    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:27.929956    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:27.929966    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:27.941193    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:27.941206    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:27.978377    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:27.978386    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:28.012489    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:28.012501    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:28.026749    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:28.026761    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:28.030968    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:28.030975    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:28.044336    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:28.044346    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:28.056458    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:28.056468    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:28.068270    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:28.068280    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:28.080415    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:28.080425    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:28.091879    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:28.091889    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:28.129390    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:28.129402    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:28.142971    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:28.142981    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:28.157620    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:28.157630    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:28.172699    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:28.172708    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:28.195406    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:28.195413    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
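Each gathering pass, like the one just completed, fans out over fixed sources: container logs via `docker logs --tail 400 <id>`, host services via journalctl, plus dmesg and container status. A sketch of that fan-out, assuming local execution through `/bin/bash -c` instead of ssh_runner and using command strings copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell one-liner the way ssh_runner does on the guest.
func run(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	sources := map[string]string{
		"etcd [5ec83535d1f0]": "docker logs --tail 400 5ec83535d1f0",
		"kubelet":             "sudo journalctl -u kubelet -n 400",
		"Docker":              "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":               "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		if out, err := run(cmd); err != nil {
			fmt.Println("failed:", err)
		} else {
			_ = out // the real collector buffers this output into the report
		}
	}
}
```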
	I0729 03:39:30.708341    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:35.710639    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:35.710825    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:35.736977    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:35.737078    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:35.751796    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:35.751864    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:35.765670    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:35.765742    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:35.783618    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:35.783696    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:35.794121    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:35.794182    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:35.806536    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:35.806610    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:35.816582    8948 logs.go:276] 0 containers: []
	W0729 03:39:35.816593    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:35.816649    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:35.826905    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:35.826926    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:35.826931    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:35.851517    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:35.851527    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:35.889638    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:35.889646    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:35.931552    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:35.931563    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:35.977625    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:35.977636    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:35.992939    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:35.992955    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:36.005262    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:36.005272    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:36.016634    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:36.016648    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:36.031403    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:36.031413    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:36.042473    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:36.042487    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:36.057966    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:36.057980    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:36.069372    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:36.069383    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:36.082793    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:36.082803    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:36.095788    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:36.095798    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:36.099886    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:36.099893    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:36.112608    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:36.112619    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:36.129493    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:36.129504    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:38.643256    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:43.645480    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:43.645925    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:43.682295    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:43.682438    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:43.702195    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:43.702318    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:43.719911    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:43.719989    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:43.732007    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:43.732075    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:43.742388    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:43.742466    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:43.753578    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:43.753647    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:43.771313    8948 logs.go:276] 0 containers: []
	W0729 03:39:43.771327    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:43.771390    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:43.782442    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:43.782463    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:43.782468    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:43.816981    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:43.816994    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:43.832217    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:43.832228    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:43.847669    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:43.847686    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:43.859813    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:43.859827    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:43.878041    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:43.878052    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:43.891625    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:43.891636    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:43.914967    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:43.914975    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:43.926524    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:43.926535    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:43.930593    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:43.930598    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:43.942154    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:43.942164    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:43.954199    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:43.954211    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:43.969897    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:43.969907    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:43.981930    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:43.981941    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:44.020108    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:44.020116    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:44.059055    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:44.059065    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:44.073409    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:44.073423    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
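The "describe nodes" step in each pass does not rely on a host kubectl: it shells out to the version-pinned binary minikube installed inside the VM, pointed at the in-VM kubeconfig. A sketch using the exact paths from the log, with local execution assumed:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths are the ones the log shows; sudo is needed because the
	// kubeconfig under /var/lib/minikube is root-owned on the guest.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```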
	I0729 03:39:46.587417    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:51.589737    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:51.589961    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:51.609531    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:51.609625    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:51.624252    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:51.624330    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:51.643933    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:51.644012    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:51.655021    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:51.655099    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:51.665478    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:51.665549    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:51.676018    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:51.676094    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:51.686474    8948 logs.go:276] 0 containers: []
	W0729 03:39:51.686485    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:51.686544    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:51.696974    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:51.696991    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:51.696997    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:51.712939    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:51.712959    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:51.728614    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:51.728628    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:51.746906    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:51.746916    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:51.760901    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:51.760911    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:51.799986    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:51.799994    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:51.840445    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:51.840460    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:51.854809    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:51.854820    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:51.869156    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:51.869165    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:39:51.883062    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:51.883073    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:51.900360    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:51.900372    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:51.925965    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:51.925984    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:51.930544    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:51.930554    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:51.968725    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:51.968735    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:51.983179    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:51.983188    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:51.998415    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:51.998426    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:52.014730    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:52.014744    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:54.528521    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:39:59.530860    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:39:59.531102    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:39:59.548870    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:39:59.548955    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:39:59.562670    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:39:59.562748    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:39:59.573528    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:39:59.573595    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:39:59.584307    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:39:59.584376    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:39:59.594444    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:39:59.594510    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:39:59.605062    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:39:59.605128    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:39:59.615959    8948 logs.go:276] 0 containers: []
	W0729 03:39:59.615970    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:39:59.616021    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:39:59.626272    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:39:59.626291    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:39:59.626296    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:39:59.640246    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:39:59.640257    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:39:59.655506    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:39:59.655518    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:39:59.670025    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:39:59.670036    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:39:59.693801    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:39:59.693808    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:39:59.731914    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:39:59.731927    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:39:59.747295    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:39:59.747305    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:39:59.764661    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:39:59.764670    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:39:59.776567    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:39:59.776578    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:39:59.781099    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:39:59.781109    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:39:59.816481    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:39:59.816492    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:39:59.828646    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:39:59.828657    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:39:59.840947    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:39:59.840960    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:39:59.853946    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:39:59.853956    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:39:59.867247    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:39:59.867260    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:39:59.906099    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:39:59.906108    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:39:59.917952    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:39:59.917963    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
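The "container status" one-liner seen in every pass, ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``, encodes a fallback chain: prefer crictl if installed, otherwise fall back to the Docker CLI. The same logic as a sketch, assuming local execution:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try crictl first, mirroring `which crictl || echo crictl`.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
	}
	// crictl missing or failed: fall back to `sudo docker ps -a`.
	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
}
```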
	I0729 03:40:02.435583    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:07.436933    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:07.437164    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:07.456208    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:07.456294    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:07.470418    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:07.470496    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:07.483340    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:07.483410    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:07.495661    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:07.495724    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:07.512556    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:07.512627    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:07.523445    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:07.523511    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:07.536070    8948 logs.go:276] 0 containers: []
	W0729 03:40:07.536085    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:07.536142    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:07.546314    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:07.546330    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:07.546336    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:07.560889    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:07.560900    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:07.572688    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:07.572699    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:07.584813    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:07.584825    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:07.619572    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:07.619587    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:07.634167    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:07.634178    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:07.646959    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:07.646972    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:07.664551    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:07.664562    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:07.677549    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:07.677558    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:07.691137    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:07.691147    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:07.702591    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:07.702603    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:07.719565    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:07.719575    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:07.730544    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:07.730556    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:07.741807    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:07.741819    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:07.765815    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:07.765824    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:07.804555    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:07.804562    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:07.808980    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:07.808988    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:10.349092    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:15.351194    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:15.351360    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:15.364183    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:15.364258    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:15.375038    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:15.375109    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:15.385514    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:15.385582    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:15.396744    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:15.396816    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:15.407104    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:15.407169    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:15.417620    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:15.417681    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:15.428113    8948 logs.go:276] 0 containers: []
	W0729 03:40:15.428127    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:15.428181    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:15.438650    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:15.438669    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:15.438674    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:15.453352    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:15.453361    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:15.475563    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:15.475569    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:15.512452    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:15.512462    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:15.550262    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:15.550274    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:15.588284    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:15.588296    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:15.593138    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:15.593146    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:15.607446    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:15.607461    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:15.622584    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:15.622593    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:15.635299    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:15.635310    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:15.646261    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:15.646273    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:15.657943    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:15.657952    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:15.671613    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:15.671623    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:15.683330    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:15.683339    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:15.696088    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:15.696099    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:15.709811    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:15.709821    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:15.728884    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:15.728895    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:18.242592    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:23.245222    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:23.245531    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:23.276938    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:23.277060    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:23.295278    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:23.295358    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:23.309249    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:23.309324    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:23.322019    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:23.322139    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:23.334083    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:23.334156    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:23.346217    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:23.346291    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:23.356412    8948 logs.go:276] 0 containers: []
	W0729 03:40:23.356420    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:23.356473    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:23.367211    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:23.367230    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:23.367235    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:23.382015    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:23.382024    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:23.393991    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:23.394000    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:23.411347    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:23.411356    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:23.449709    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:23.449718    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:23.454092    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:23.454097    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:23.465693    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:23.465703    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:23.481542    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:23.481551    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:23.495091    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:23.495104    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:23.507526    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:23.507537    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:23.519617    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:23.519626    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:23.557299    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:23.557309    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:23.571947    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:23.571960    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:23.594596    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:23.594606    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:23.629696    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:23.629712    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:23.646728    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:23.646743    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:23.658040    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:23.658055    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:26.169708    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:31.172030    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:31.172245    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:31.197669    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:31.197784    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:31.214406    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:31.214487    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:31.227360    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:31.227421    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:31.238969    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:31.239043    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:31.253505    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:31.253568    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:31.264083    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:31.264151    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:31.274446    8948 logs.go:276] 0 containers: []
	W0729 03:40:31.274458    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:31.274517    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:31.288291    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:31.288308    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:31.288313    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:31.302840    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:31.302851    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:31.316164    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:31.316174    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:31.328193    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:31.328206    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:31.343890    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:31.343900    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:31.355533    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:31.355545    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:31.374843    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:31.374851    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:31.398244    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:31.398252    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:31.436313    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:31.436319    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:31.472308    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:31.472322    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:31.477370    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:31.477378    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:31.491102    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:31.491113    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:31.504668    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:31.504678    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:31.518811    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:31.518824    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:31.530723    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:31.530739    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:31.545257    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:31.545268    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:31.581987    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:31.581997    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:34.095973    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:39.098558    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:39.098992    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:39.134533    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:39.134659    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:39.152410    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:39.152500    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:39.166084    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:39.166160    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:39.178013    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:39.178090    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:39.193723    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:39.193798    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:39.222026    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:39.222107    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:39.232904    8948 logs.go:276] 0 containers: []
	W0729 03:40:39.232917    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:39.232980    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:39.253131    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:39.253150    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:39.253155    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:39.267495    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:39.267506    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:39.271917    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:39.271923    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:39.306682    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:39.306694    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:39.321368    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:39.321378    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:39.333528    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:39.333540    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:39.347855    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:39.347865    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:39.367012    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:39.367024    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:39.380391    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:39.380401    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:39.392482    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:39.392493    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:39.404229    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:39.404241    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:39.417774    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:39.417785    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:39.456703    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:39.456715    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:39.493705    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:39.493717    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:39.507167    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:39.507180    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:39.519003    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:39.519014    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:39.534378    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:39.534389    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:42.056787    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:47.056955    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:47.057144    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:40:47.079482    8948 logs.go:276] 2 containers: [d5cd4a30fc18 6c9e82fc6ad9]
	I0729 03:40:47.079598    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:40:47.099186    8948 logs.go:276] 2 containers: [c053f31036d8 5ec83535d1f0]
	I0729 03:40:47.099263    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:40:47.111007    8948 logs.go:276] 1 containers: [6be12b02b510]
	I0729 03:40:47.111073    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:40:47.134595    8948 logs.go:276] 2 containers: [e826afc8611d 0c6f4763c087]
	I0729 03:40:47.134659    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:40:47.148542    8948 logs.go:276] 1 containers: [831a0950b89a]
	I0729 03:40:47.148603    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:40:47.160557    8948 logs.go:276] 2 containers: [ddfd1da889f4 2ed58f54ac75]
	I0729 03:40:47.160627    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:40:47.171166    8948 logs.go:276] 0 containers: []
	W0729 03:40:47.171179    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:40:47.171235    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:40:47.187444    8948 logs.go:276] 2 containers: [7a10cf5a7696 0eacfcddf704]
	I0729 03:40:47.187464    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:40:47.187470    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:40:47.200506    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:40:47.200517    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:40:47.205332    8948 logs.go:123] Gathering logs for kube-scheduler [0c6f4763c087] ...
	I0729 03:40:47.205340    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c6f4763c087"
	I0729 03:40:47.221310    8948 logs.go:123] Gathering logs for storage-provisioner [0eacfcddf704] ...
	I0729 03:40:47.221321    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0eacfcddf704"
	I0729 03:40:47.233349    8948 logs.go:123] Gathering logs for kube-controller-manager [2ed58f54ac75] ...
	I0729 03:40:47.233361    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed58f54ac75"
	I0729 03:40:47.249017    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:40:47.249031    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:40:47.283460    8948 logs.go:123] Gathering logs for kube-apiserver [6c9e82fc6ad9] ...
	I0729 03:40:47.283474    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c9e82fc6ad9"
	I0729 03:40:47.323074    8948 logs.go:123] Gathering logs for coredns [6be12b02b510] ...
	I0729 03:40:47.323092    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6be12b02b510"
	I0729 03:40:47.334430    8948 logs.go:123] Gathering logs for storage-provisioner [7a10cf5a7696] ...
	I0729 03:40:47.334441    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a10cf5a7696"
	I0729 03:40:47.346626    8948 logs.go:123] Gathering logs for kube-apiserver [d5cd4a30fc18] ...
	I0729 03:40:47.346635    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cd4a30fc18"
	I0729 03:40:47.360066    8948 logs.go:123] Gathering logs for kube-proxy [831a0950b89a] ...
	I0729 03:40:47.360077    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 831a0950b89a"
	I0729 03:40:47.372489    8948 logs.go:123] Gathering logs for kube-controller-manager [ddfd1da889f4] ...
	I0729 03:40:47.372503    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddfd1da889f4"
	I0729 03:40:47.389423    8948 logs.go:123] Gathering logs for kube-scheduler [e826afc8611d] ...
	I0729 03:40:47.389434    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e826afc8611d"
	I0729 03:40:47.401949    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:40:47.401961    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:40:47.425345    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:40:47.425355    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:40:47.464394    8948 logs.go:123] Gathering logs for etcd [c053f31036d8] ...
	I0729 03:40:47.464410    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c053f31036d8"
	I0729 03:40:47.484388    8948 logs.go:123] Gathering logs for etcd [5ec83535d1f0] ...
	I0729 03:40:47.484400    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec83535d1f0"
	I0729 03:40:50.001690    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:40:55.004127    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:40:55.004259    8948 kubeadm.go:597] duration metric: took 4m4.215185375s to restartPrimaryControlPlane
	W0729 03:40:55.004380    8948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 03:40:55.004435    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 03:40:56.047114    8948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.042683s)
	I0729 03:40:56.047193    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:40:56.052070    8948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 03:40:56.055017    8948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 03:40:56.058047    8948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 03:40:56.058057    8948 kubeadm.go:157] found existing configuration files:
	
	I0729 03:40:56.058102    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf
	I0729 03:40:56.060786    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 03:40:56.060829    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 03:40:56.063879    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf
	I0729 03:40:56.066679    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 03:40:56.066712    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 03:40:56.069744    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf
	I0729 03:40:56.072731    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 03:40:56.072771    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 03:40:56.076188    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf
	I0729 03:40:56.079685    8948 kubeadm.go:163] "https://control-plane.minikube.internal:51469" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51469 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 03:40:56.079716    8948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 03:40:56.082765    8948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 03:40:56.101966    8948 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 03:40:56.102059    8948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 03:40:56.159420    8948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 03:40:56.159550    8948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 03:40:56.159596    8948 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 03:40:56.215937    8948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 03:40:56.220167    8948 out.go:204]   - Generating certificates and keys ...
	I0729 03:40:56.220222    8948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 03:40:56.220288    8948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 03:40:56.220341    8948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 03:40:56.220418    8948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 03:40:56.220464    8948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 03:40:56.220505    8948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 03:40:56.220543    8948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 03:40:56.220575    8948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 03:40:56.220654    8948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 03:40:56.220701    8948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 03:40:56.220742    8948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 03:40:56.220771    8948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 03:40:56.389581    8948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 03:40:56.471939    8948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 03:40:56.631690    8948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 03:40:56.866916    8948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 03:40:56.898030    8948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 03:40:56.898425    8948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 03:40:56.898486    8948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 03:40:56.979682    8948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 03:40:56.983547    8948 out.go:204]   - Booting up control plane ...
	I0729 03:40:56.983599    8948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 03:40:56.983644    8948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 03:40:56.983711    8948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 03:40:56.983753    8948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 03:40:56.984198    8948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 03:41:01.488209    8948 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503852 seconds
	I0729 03:41:01.488277    8948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 03:41:01.491885    8948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 03:41:01.999675    8948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 03:41:01.999781    8948 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-590000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 03:41:02.503775    8948 kubeadm.go:310] [bootstrap-token] Using token: k23ilj.fm7zinf82r1k73h9
	I0729 03:41:02.510135    8948 out.go:204]   - Configuring RBAC rules ...
	I0729 03:41:02.510191    8948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 03:41:02.510242    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 03:41:02.516889    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 03:41:02.517686    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 03:41:02.518481    8948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 03:41:02.519287    8948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 03:41:02.523451    8948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 03:41:02.726941    8948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 03:41:02.907505    8948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 03:41:02.908027    8948 kubeadm.go:310] 
	I0729 03:41:02.908062    8948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 03:41:02.908065    8948 kubeadm.go:310] 
	I0729 03:41:02.908101    8948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 03:41:02.908105    8948 kubeadm.go:310] 
	I0729 03:41:02.908121    8948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 03:41:02.908151    8948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 03:41:02.908190    8948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 03:41:02.908213    8948 kubeadm.go:310] 
	I0729 03:41:02.908272    8948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 03:41:02.908278    8948 kubeadm.go:310] 
	I0729 03:41:02.908358    8948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 03:41:02.908362    8948 kubeadm.go:310] 
	I0729 03:41:02.908420    8948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 03:41:02.908476    8948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 03:41:02.908533    8948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 03:41:02.908537    8948 kubeadm.go:310] 
	I0729 03:41:02.908575    8948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 03:41:02.908615    8948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 03:41:02.908620    8948 kubeadm.go:310] 
	I0729 03:41:02.908726    8948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k23ilj.fm7zinf82r1k73h9 \
	I0729 03:41:02.908798    8948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 \
	I0729 03:41:02.908808    8948 kubeadm.go:310] 	--control-plane 
	I0729 03:41:02.908810    8948 kubeadm.go:310] 
	I0729 03:41:02.908867    8948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 03:41:02.908870    8948 kubeadm.go:310] 
	I0729 03:41:02.908992    8948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k23ilj.fm7zinf82r1k73h9 \
	I0729 03:41:02.909063    8948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56da7cbeac47112c1517f3d5f4aec3aafe98daa728e4f5de9707d5d85e63df76 
	I0729 03:41:02.909132    8948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 03:41:02.909142    8948 cni.go:84] Creating CNI manager for ""
	I0729 03:41:02.909164    8948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:41:02.913424    8948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 03:41:02.920391    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 03:41:02.924058    8948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 03:41:02.928901    8948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 03:41:02.928977    8948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-590000 minikube.k8s.io/updated_at=2024_07_29T03_41_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=stopped-upgrade-590000 minikube.k8s.io/primary=true
	I0729 03:41:02.928981    8948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 03:41:02.962443    8948 kubeadm.go:1113] duration metric: took 33.499667ms to wait for elevateKubeSystemPrivileges
	I0729 03:41:02.981196    8948 ops.go:34] apiserver oom_adj: -16
	I0729 03:41:02.981303    8948 kubeadm.go:394] duration metric: took 4m12.205867791s to StartCluster
	I0729 03:41:02.981318    8948 settings.go:142] acquiring lock: {Name:mk5fe4de5daf4f1a01814785384dc93f95ac574d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:41:02.981407    8948 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:41:02.981809    8948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/kubeconfig: {Name:mk88e6cb321d16f76049e5804261f3b045a9d412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:41:02.982025    8948 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:41:02.982043    8948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 03:41:02.982083    8948 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-590000"
	I0729 03:41:02.982092    8948 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-590000"
	I0729 03:41:02.982095    8948 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-590000"
	W0729 03:41:02.982144    8948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 03:41:02.982103    8948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-590000"
	I0729 03:41:02.982156    8948 host.go:66] Checking if "stopped-upgrade-590000" exists ...
	I0729 03:41:02.982124    8948 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:41:02.986354    8948 out.go:177] * Verifying Kubernetes components...
	I0729 03:41:02.987095    8948 kapi.go:59] client config for stopped-upgrade-590000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/stopped-upgrade-590000/client.key", CAFile:"/Users/jenkins/minikube-integration/19337-6349/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b60080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 03:41:02.990717    8948 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-590000"
	W0729 03:41:02.990722    8948 addons.go:243] addon default-storageclass should already be in state true
	I0729 03:41:02.990729    8948 host.go:66] Checking if "stopped-upgrade-590000" exists ...
	I0729 03:41:02.991247    8948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 03:41:02.991252    8948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 03:41:02.991257    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:41:02.994329    8948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 03:41:02.998439    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 03:41:03.002406    8948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:41:03.002412    8948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 03:41:03.002417    8948 sshutil.go:53] new ssh client: &{IP:localhost Port:51434 SSHKeyPath:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/stopped-upgrade-590000/id_rsa Username:docker}
	I0729 03:41:03.085064    8948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 03:41:03.090178    8948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 03:41:03.090219    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:41:03.093994    8948 api_server.go:72] duration metric: took 111.960584ms to wait for apiserver process to appear ...
	I0729 03:41:03.094002    8948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 03:41:03.094009    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:03.118661    8948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 03:41:03.146056    8948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 03:41:08.095997    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:08.096023    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:13.096155    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:13.096192    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:18.096394    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:18.096427    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:23.096719    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:23.096779    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:28.097294    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:28.097316    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:33.097825    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:33.097870    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 03:41:33.459337    8948 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 03:41:33.462796    8948 out.go:177] * Enabled addons: storage-provisioner
	I0729 03:41:33.470565    8948 addons.go:510] duration metric: took 30.4891155s for enable addons: enabled=[storage-provisioner]
	I0729 03:41:38.098677    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:38.098710    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:43.099625    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:43.099646    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:48.100951    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:48.100989    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:53.102601    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:53.102625    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:41:58.104682    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:41:58.104731    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:03.107027    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:03.107245    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:03.136636    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:03.136703    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:03.148569    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:03.148645    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:03.159570    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:03.159640    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:03.169755    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:03.169828    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:03.180031    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:03.180098    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:03.190335    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:03.190398    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:03.200403    8948 logs.go:276] 0 containers: []
	W0729 03:42:03.200413    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:03.200460    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:03.210838    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:03.210851    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:03.210856    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:03.222629    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:03.222639    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:03.260425    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:03.260437    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:03.274681    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:03.274695    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:03.288276    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:03.288287    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:03.300358    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:03.300371    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:03.314976    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:03.314989    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:03.327811    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:03.327829    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:03.351534    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:03.351543    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:03.384515    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:03.384521    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:03.388988    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:03.388995    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:03.401051    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:03.401061    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:03.416269    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:03.416279    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:05.935504    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:10.936482    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:10.936875    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:10.967968    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:10.968088    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:10.986459    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:10.986545    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:11.000022    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:11.000087    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:11.011651    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:11.011714    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:11.026943    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:11.027004    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:11.038228    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:11.038297    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:11.048362    8948 logs.go:276] 0 containers: []
	W0729 03:42:11.048373    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:11.048423    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:11.058956    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:11.058973    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:11.058978    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:11.070699    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:11.070708    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:11.088013    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:11.088027    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:11.100281    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:11.100296    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:11.134212    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:11.134220    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:11.138186    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:11.138194    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:11.171796    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:11.171810    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:11.190588    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:11.190602    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:11.201337    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:11.201351    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:11.212803    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:11.212818    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:11.228031    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:11.228045    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:11.243373    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:11.243383    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:11.255080    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:11.255095    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:13.781362    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:18.783458    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:18.783941    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:18.828084    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:18.828239    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:18.847629    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:18.847723    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:18.862041    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:18.862111    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:18.873431    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:18.873500    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:18.884970    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:18.885033    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:18.895609    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:18.895675    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:18.905610    8948 logs.go:276] 0 containers: []
	W0729 03:42:18.905621    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:18.905674    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:18.916194    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:18.916213    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:18.916218    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:18.931607    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:18.931617    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:18.942852    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:18.942861    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:18.960144    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:18.960156    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:18.993811    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:18.993820    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:18.997620    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:18.997628    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:19.031201    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:19.031213    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:19.045702    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:19.045710    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:19.060398    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:19.060410    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:19.076076    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:19.076092    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:19.088814    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:19.088825    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:19.101901    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:19.101917    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:19.127848    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:19.127860    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:21.641789    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:26.644455    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:26.644880    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:26.684039    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:26.684173    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:26.705537    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:26.705652    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:26.723974    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:26.724045    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:26.736339    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:26.736405    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:26.747382    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:26.747451    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:26.758186    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:26.758252    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:26.768446    8948 logs.go:276] 0 containers: []
	W0729 03:42:26.768459    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:26.768509    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:26.779484    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:26.779503    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:26.779508    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:26.791720    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:26.791730    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:26.809159    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:26.809169    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:26.824588    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:26.824601    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:26.842223    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:26.842234    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:26.846726    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:26.846735    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:26.864582    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:26.864595    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:26.880801    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:26.880813    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:26.893370    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:26.893385    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:26.916438    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:26.916446    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:26.950200    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:26.950207    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:26.985232    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:26.985245    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:26.996826    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:26.996838    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:29.510044    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:34.512316    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:34.512584    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:34.538914    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:34.539027    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:34.554168    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:34.554247    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:34.566254    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:34.566329    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:34.577190    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:34.577257    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:34.593689    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:34.593759    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:34.604122    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:34.604191    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:34.614481    8948 logs.go:276] 0 containers: []
	W0729 03:42:34.614491    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:34.614547    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:34.624799    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:34.624814    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:34.624820    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:34.635956    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:34.635968    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:34.670061    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:34.670069    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:34.674073    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:34.674081    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:34.687658    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:34.687666    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:34.703307    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:34.703321    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:34.718561    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:34.718569    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:34.729790    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:34.729802    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:34.746984    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:34.746996    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:34.758508    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:34.758517    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:34.795816    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:34.795829    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:34.810462    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:34.810474    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:34.835373    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:34.835385    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:37.349329    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:42.351933    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:42.352929    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:42.392091    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:42.392215    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:42.414161    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:42.414272    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:42.434052    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:42.434127    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:42.446046    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:42.446107    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:42.457097    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:42.457171    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:42.468427    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:42.468484    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:42.479168    8948 logs.go:276] 0 containers: []
	W0729 03:42:42.479184    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:42.479244    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:42.489774    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:42.489789    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:42.489793    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:42.501451    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:42.501464    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:42.505656    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:42.505664    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:42.540826    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:42.540838    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:42.556057    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:42.556068    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:42.569542    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:42.569552    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:42.581079    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:42.581091    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:42.598917    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:42.598927    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:42.623598    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:42.623605    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:42.635252    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:42.635265    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:42.671292    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:42.671301    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:42.686316    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:42.686329    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:42.699915    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:42.699928    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:45.214112    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:50.216556    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:50.216983    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:50.257513    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:50.257639    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:50.280049    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:50.280141    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:50.295810    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:50.295982    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:50.308883    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:50.308959    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:50.319528    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:50.319601    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:50.330448    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:50.330516    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:50.340826    8948 logs.go:276] 0 containers: []
	W0729 03:42:50.340840    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:50.340893    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:50.357893    8948 logs.go:276] 1 containers: [5ab7b69f939b]
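Before gathering logs, minikube enumerates each control-plane component with one docker ps call per name filter. The same inventory can be taken in a single query, assuming the usual k8s_ name prefix that kubelet gives Docker-managed containers:

    # One-shot view of every kubelet-managed container, with name and state.
    docker ps -a --filter 'name=k8s_' --format '{{.ID}}\t{{.Names}}\t{{.Status}}'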
	I0729 03:42:50.357909    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:50.357913    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:50.369479    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:50.369489    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:50.383682    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:50.383692    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:50.403250    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:50.403261    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:50.438229    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:50.438237    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:42:50.442278    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:50.442286    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:50.457362    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:50.457374    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:50.469693    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:50.469702    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:50.481460    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:50.481469    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:50.505241    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:50.505248    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:50.516818    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:50.516828    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:50.569755    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:50.569765    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:50.584073    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:50.584084    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:53.096278    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:42:58.097282    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:42:58.097631    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:42:58.131450    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:42:58.131579    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:42:58.152086    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:42:58.152183    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:42:58.171193    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:42:58.171261    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:42:58.182916    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:42:58.182987    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:42:58.194435    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:42:58.194507    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:42:58.206046    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:42:58.206115    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:42:58.216827    8948 logs.go:276] 0 containers: []
	W0729 03:42:58.216838    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:42:58.216889    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:42:58.228420    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:42:58.228436    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:42:58.228441    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:42:58.240393    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:42:58.240406    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:42:58.255724    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:42:58.255734    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:42:58.268250    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:42:58.268260    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:42:58.293453    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:42:58.293461    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:42:58.305348    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:42:58.305361    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:42:58.329578    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:42:58.329586    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:42:58.365520    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:42:58.365533    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:42:58.377501    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:42:58.377516    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:42:58.389003    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:42:58.389016    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:42:58.403860    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:42:58.403869    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:42:58.421018    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:42:58.421028    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:42:58.454315    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:42:58.454323    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
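The "container status" step in each cycle relies on a shell fallback: the backticks expand to crictl's path when it is installed (or to the bare word crictl, which then fails), and the trailing || drops back to querying Docker directly. A more readable equivalent of the same fallback:

    # Prefer the CRI view when crictl is present, else ask the Docker engine.
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi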
	I0729 03:43:00.960269    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:05.962467    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:05.962646    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:05.982394    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:05.982493    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:05.997146    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:05.997214    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:06.009317    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:43:06.009385    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:06.020155    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:06.020216    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:06.033258    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:06.033328    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:06.044681    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:06.044744    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:06.055858    8948 logs.go:276] 0 containers: []
	W0729 03:43:06.055872    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:06.055925    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:06.066941    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:06.066956    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:06.066962    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:06.090209    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:06.090219    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:06.125454    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:06.125467    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:06.139946    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:06.139957    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:06.152060    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:06.152073    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:06.164631    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:06.164645    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:06.176499    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:06.176508    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:06.192926    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:06.192938    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:06.210776    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:06.210789    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:06.223193    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:06.223204    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:06.259277    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:06.259286    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:43:06.263250    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:06.263256    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:06.277636    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:06.277648    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
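The kubelet and Docker logs come from the systemd journal rather than from containers, since neither runs as a container here. Those invocations are plain journalctl reads and can be reproduced verbatim on the guest:

    sudo journalctl -u kubelet -n 400               # last 400 kubelet lines
    sudo journalctl -u docker -u cri-docker -n 400  # engine plus CRI shim together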
	I0729 03:43:08.791060    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:13.793287    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:13.793718    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:13.833288    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:13.833420    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:13.857414    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:13.857519    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:13.873286    8948 logs.go:276] 2 containers: [3aa3da0e32a3 340ef99a6480]
	I0729 03:43:13.873366    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:13.893234    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:13.893303    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:13.905501    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:13.905571    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:13.916545    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:13.916609    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:13.929354    8948 logs.go:276] 0 containers: []
	W0729 03:43:13.929365    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:13.929418    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:13.940611    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:13.940626    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:13.940632    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:13.962323    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:13.962339    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:43:13.983687    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:13.983700    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:14.032133    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:14.032157    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:43:14.036864    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:14.036876    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:14.103634    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:14.103646    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:14.118537    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:14.118550    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:14.134960    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:14.134974    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:14.151687    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:14.151698    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:14.167355    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:14.167369    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:14.179350    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:14.179365    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:14.197021    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:14.197033    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:14.208536    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:14.208545    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:16.735562    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:21.738223    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:21.738589    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:21.778930    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:21.779049    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:21.800560    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:21.800654    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:21.820329    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:43:21.820412    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:21.832431    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:21.832497    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:21.842727    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:21.842790    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:21.853678    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:21.853742    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:21.867378    8948 logs.go:276] 0 containers: []
	W0729 03:43:21.867387    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:21.867436    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:21.878015    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:21.878032    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:21.878037    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:21.915692    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:21.915708    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:21.929602    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:21.929613    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:43:21.933773    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:43:21.933780    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:43:21.944907    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:21.944918    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:21.959464    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:21.959475    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:21.977468    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:21.977479    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:22.002788    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:22.002795    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:22.017888    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:22.017898    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:22.031628    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:22.031638    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:22.043055    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:22.043066    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:22.077861    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:43:22.077871    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:43:22.089596    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:22.089607    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:22.101649    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:22.101663    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:22.113277    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:22.113289    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
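Note that the coredns inventory grows from two containers to four in this cycle (a2e626f83d5b and 5ee046eb1929 join 3aa3da0e32a3 and 340ef99a6480), which is consistent with kubelet restarting the coredns pods while the apiserver stays unreachable; the log never states a restart explicitly, so this is an inference from the container lists alone. The new IDs can be inspected the same way as the old ones:

    # All coredns containers, running and exited, with their states.
    docker ps -a --filter 'name=k8s_coredns' --format '{{.ID}}\t{{.Status}}'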
	I0729 03:43:24.627049    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:29.629808    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:29.630217    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:29.669992    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:29.670134    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:29.692989    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:29.693096    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:29.707855    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:43:29.707927    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:29.725504    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:29.725569    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:29.736515    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:29.736580    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:29.747164    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:29.747228    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:29.757534    8948 logs.go:276] 0 containers: []
	W0729 03:43:29.757545    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:29.757596    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:29.767798    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:29.767817    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:43:29.767821    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:43:29.780522    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:29.780535    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:29.794471    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:29.794480    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:43:29.807007    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:29.807021    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:29.842586    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:29.842595    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:29.854679    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:29.854693    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:29.873249    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:43:29.873260    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:43:29.888930    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:29.888944    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:29.904194    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:29.904206    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:29.916608    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:29.916622    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:29.930533    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:29.930547    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:29.965670    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:29.965682    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:29.980126    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:29.980136    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:29.991997    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:29.992007    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:30.018008    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:30.018022    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
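The dmesg step filters the kernel ring buffer down to warnings and worse before trimming to 400 lines. Reading the flags (util-linux dmesg, as assumed here): -P disables the pager, -H selects human-readable output, -L=never suppresses color, and --level restricts the severities shown:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400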
	I0729 03:43:32.522625    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:37.524826    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:37.525162    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:37.554399    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:37.554522    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:37.577930    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:37.578008    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:37.591672    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:43:37.591741    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:37.603613    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:37.603677    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:37.614639    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:37.614707    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:37.625231    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:37.625295    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:37.635490    8948 logs.go:276] 0 containers: []
	W0729 03:43:37.635501    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:37.635549    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:37.645947    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:37.645965    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:37.645970    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:37.671070    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:37.671077    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:43:37.675696    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:37.675704    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:37.690389    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:37.690401    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:37.705154    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:37.705164    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:37.716493    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:43:37.716506    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:43:37.728210    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:37.728224    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:37.740201    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:37.740214    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:37.752190    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:37.752200    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:37.763946    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:37.763958    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:37.797567    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:37.797574    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:37.811416    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:37.811428    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:37.828590    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:37.828600    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:43:37.840005    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:37.840016    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:37.874787    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:43:37.874801    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
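The "describe nodes" step runs the version-pinned kubectl that minikube ships inside the guest (/var/lib/minikube/binaries/v1.24.1/kubectl) against the cluster's own kubeconfig. From the host, an equivalent query would be the following, assuming a kubeconfig context named minikube (a hypothetical context name for illustration):

    kubectl --context minikube describe nodes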
	I0729 03:43:40.394112    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:45.396667    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:45.396720    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:45.407564    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:45.407639    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:45.418628    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:45.418688    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:45.429554    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:43:45.429619    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:45.440737    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:45.440792    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:45.452602    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:45.452679    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:45.464287    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:45.464337    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:45.475246    8948 logs.go:276] 0 containers: []
	W0729 03:43:45.475258    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:45.475306    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:45.486270    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:45.486285    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:45.486290    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:45.498086    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:45.498096    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:45.517803    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:45.517812    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:45.530033    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:45.530042    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:45.542182    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:45.542191    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:45.555553    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:45.555562    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:43:45.568572    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:45.568581    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:43:45.573144    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:45.573152    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:45.588869    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:45.588888    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:45.604480    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:45.604497    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:45.629721    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:45.629737    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:45.646903    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:45.646921    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:45.684925    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:45.684946    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:45.729224    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:43:45.729235    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:43:45.744889    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:43:45.744904    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:43:48.259259    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:43:53.261974    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:43:53.262124    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:43:53.273445    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:43:53.273510    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:43:53.283811    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:43:53.283870    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:43:53.294208    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:43:53.294273    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:43:53.305163    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:43:53.305228    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:43:53.315119    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:43:53.315177    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:43:53.325713    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:43:53.325781    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:43:53.336026    8948 logs.go:276] 0 containers: []
	W0729 03:43:53.336037    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:43:53.336091    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:43:53.346745    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:43:53.346763    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:43:53.346768    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:43:53.351065    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:43:53.351073    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:43:53.365133    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:43:53.365146    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:43:53.376364    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:43:53.376377    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:43:53.401428    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:43:53.401434    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:43:53.434212    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:43:53.434218    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:43:53.472314    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:43:53.472328    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:43:53.483940    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:43:53.483952    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:43:53.498252    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:43:53.498264    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:43:53.515680    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:43:53.515690    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:43:53.529794    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:43:53.529806    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:43:53.541113    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:43:53.541124    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:43:53.553076    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:43:53.553090    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:43:53.564721    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:43:53.564735    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:43:53.575986    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:43:53.575999    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:43:56.087904    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:01.089959    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:01.090113    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:01.102144    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:01.102218    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:01.116847    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:01.116913    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:01.128217    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:01.128289    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:01.138040    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:01.138112    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:01.148982    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:01.149051    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:01.159221    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:01.159282    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:01.169648    8948 logs.go:276] 0 containers: []
	W0729 03:44:01.169659    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:01.169706    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:01.180188    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:01.180204    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:01.180209    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:01.191671    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:01.191683    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:01.215325    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:01.215333    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:01.236592    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:01.236604    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:01.271177    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:01.271190    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:01.285375    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:01.285388    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:01.297137    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:01.297149    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:01.308576    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:01.308589    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:01.323814    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:01.323825    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:01.338709    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:01.338722    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:01.371992    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:01.372000    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:01.386115    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:01.386128    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:01.397926    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:01.397938    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:01.402529    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:01.402536    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:01.414205    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:01.414218    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:03.928064    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:08.930778    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:08.930888    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:08.941879    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:08.941940    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:08.953362    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:08.953416    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:08.965268    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:08.965330    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:08.976154    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:08.976216    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:08.987401    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:08.987459    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:08.998727    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:08.998790    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:09.011559    8948 logs.go:276] 0 containers: []
	W0729 03:44:09.011569    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:09.011609    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:09.025549    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:09.025568    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:09.025573    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:09.038181    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:09.038190    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:09.075643    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:09.075654    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:09.091171    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:09.091185    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:09.103033    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:09.103044    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:09.127860    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:09.127872    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:09.140062    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:09.140071    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:09.153930    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:09.153941    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:09.158292    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:09.158303    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:09.171257    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:09.171270    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:09.184512    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:09.184521    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:09.201121    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:09.201131    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:09.220904    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:09.220916    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:09.232801    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:09.232808    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:09.268475    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:09.268488    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:11.790668    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:16.793351    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:16.793642    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:16.827604    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:16.827728    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:16.844641    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:16.844705    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:16.858027    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:16.858085    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:16.870034    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:16.870096    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:16.883312    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:16.883395    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:16.895513    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:16.895578    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:16.909259    8948 logs.go:276] 0 containers: []
	W0729 03:44:16.909270    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:16.909322    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:16.921365    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:16.921383    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:16.921389    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:16.935227    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:16.935240    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:16.949000    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:16.949014    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:16.962232    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:16.962251    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:16.975270    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:16.975283    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:17.011411    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:17.011432    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:17.024416    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:17.024429    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:17.039339    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:17.039349    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:17.058791    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:17.058802    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:17.070751    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:17.070761    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:17.075557    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:17.075564    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:17.111156    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:17.111167    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:17.125819    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:17.125829    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:17.139862    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:17.139871    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:17.159361    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:17.159374    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:19.687563    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:24.689799    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:24.690238    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:24.730774    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:24.730914    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:24.753390    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:24.753489    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:24.768838    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:24.768915    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:24.781693    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:24.781765    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:24.792577    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:24.792642    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:24.802987    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:24.803048    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:24.813129    8948 logs.go:276] 0 containers: []
	W0729 03:44:24.813139    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:24.813185    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:24.823931    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:24.823948    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:24.823954    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:24.838279    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:24.838292    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:24.849890    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:24.849903    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:24.860764    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:24.860774    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:24.872275    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:24.872289    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:24.887492    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:24.887504    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:24.921639    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:24.921649    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:24.933955    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:24.933965    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:24.938278    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:24.938284    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:24.952581    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:24.952593    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:24.967092    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:24.967105    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:24.984926    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:24.984936    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:24.996432    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:24.996445    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:25.020256    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:25.020266    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:25.053917    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:25.053925    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:27.567299    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:32.568797    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:32.568856    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:32.580612    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:32.580670    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:32.592844    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:32.592895    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:32.604762    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:32.604818    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:32.615890    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:32.615955    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:32.627384    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:32.627440    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:32.639129    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:32.639184    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:32.649935    8948 logs.go:276] 0 containers: []
	W0729 03:44:32.649944    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:32.649993    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:32.661184    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:32.661205    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:32.661211    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:32.685376    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:32.685396    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:32.724158    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:32.724171    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:32.740386    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:32.740398    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:32.755811    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:32.755821    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:32.779757    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:32.779769    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:32.793915    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:32.793924    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:32.806810    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:32.806819    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:32.830819    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:32.830827    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:32.845718    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:32.845728    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:32.882913    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:32.882934    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:32.887718    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:32.887728    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:32.901012    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:32.901023    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:32.913978    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:32.913986    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:32.934763    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:32.934774    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:35.450320    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:40.453176    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:40.453580    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:40.491409    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:40.491542    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:40.512788    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:40.512908    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:40.533997    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:40.534072    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:40.545822    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:40.545887    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:40.556078    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:40.556142    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:40.566326    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:40.566389    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:40.576856    8948 logs.go:276] 0 containers: []
	W0729 03:44:40.576869    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:40.576925    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:40.587148    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:40.587166    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:40.587171    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:40.600316    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:40.600331    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:40.612628    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:40.612638    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:40.630258    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:40.630271    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:40.670094    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:40.670105    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:40.703626    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:40.703640    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:40.718877    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:40.718891    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:40.732581    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:40.732592    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:40.747140    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:40.747151    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:40.782389    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:40.782397    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:40.794463    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:40.794476    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:40.806027    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:40.806041    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:40.830344    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:40.830352    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:40.835023    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:40.835031    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:40.852895    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:40.852908    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:43.366319    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:48.368621    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:48.368922    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:48.410471    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:48.410572    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:48.429026    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:48.429108    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:48.443176    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:48.443248    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:48.455581    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:48.455641    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:48.466537    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:48.466594    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:48.477144    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:48.477215    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:48.491516    8948 logs.go:276] 0 containers: []
	W0729 03:44:48.491527    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:48.491577    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:48.501741    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:48.501757    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:48.501761    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:48.535973    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:48.535982    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:48.539934    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:48.539942    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:48.557326    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:48.557339    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:48.569444    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:48.569456    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:48.581309    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:48.581323    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:48.595352    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:48.595362    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:48.606492    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:48.606502    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:48.629933    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:48.629942    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:48.664450    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:48.664463    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:48.678626    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:48.678637    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:48.689990    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:48.690004    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:48.701075    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:48.701087    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:48.714997    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:48.715010    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:48.726760    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:48.726772    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:51.240410    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:44:56.243058    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:44:56.243114    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 03:44:56.259836    8948 logs.go:276] 1 containers: [64fc6ee550f3]
	I0729 03:44:56.259885    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 03:44:56.273724    8948 logs.go:276] 1 containers: [6093e5fede52]
	I0729 03:44:56.273795    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 03:44:56.285106    8948 logs.go:276] 4 containers: [a2e626f83d5b 5ee046eb1929 3aa3da0e32a3 340ef99a6480]
	I0729 03:44:56.285161    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 03:44:56.295713    8948 logs.go:276] 1 containers: [5ef9a9c7fd53]
	I0729 03:44:56.295768    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 03:44:56.307077    8948 logs.go:276] 1 containers: [6837bd41dff9]
	I0729 03:44:56.307130    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 03:44:56.319816    8948 logs.go:276] 1 containers: [acf4d66deb0a]
	I0729 03:44:56.319870    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 03:44:56.330887    8948 logs.go:276] 0 containers: []
	W0729 03:44:56.330895    8948 logs.go:278] No container was found matching "kindnet"
	I0729 03:44:56.330935    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 03:44:56.341467    8948 logs.go:276] 1 containers: [5ab7b69f939b]
	I0729 03:44:56.341485    8948 logs.go:123] Gathering logs for container status ...
	I0729 03:44:56.341491    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 03:44:56.355671    8948 logs.go:123] Gathering logs for dmesg ...
	I0729 03:44:56.355682    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 03:44:56.361357    8948 logs.go:123] Gathering logs for describe nodes ...
	I0729 03:44:56.361368    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 03:44:56.397828    8948 logs.go:123] Gathering logs for kube-apiserver [64fc6ee550f3] ...
	I0729 03:44:56.397839    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64fc6ee550f3"
	I0729 03:44:56.413854    8948 logs.go:123] Gathering logs for kube-proxy [6837bd41dff9] ...
	I0729 03:44:56.413867    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6837bd41dff9"
	I0729 03:44:56.426910    8948 logs.go:123] Gathering logs for kube-controller-manager [acf4d66deb0a] ...
	I0729 03:44:56.426922    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf4d66deb0a"
	I0729 03:44:56.445791    8948 logs.go:123] Gathering logs for Docker ...
	I0729 03:44:56.445807    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 03:44:56.472077    8948 logs.go:123] Gathering logs for kube-scheduler [5ef9a9c7fd53] ...
	I0729 03:44:56.472096    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ef9a9c7fd53"
	I0729 03:44:56.487631    8948 logs.go:123] Gathering logs for storage-provisioner [5ab7b69f939b] ...
	I0729 03:44:56.487641    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab7b69f939b"
	I0729 03:44:56.501074    8948 logs.go:123] Gathering logs for etcd [6093e5fede52] ...
	I0729 03:44:56.501083    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6093e5fede52"
	I0729 03:44:56.515229    8948 logs.go:123] Gathering logs for coredns [a2e626f83d5b] ...
	I0729 03:44:56.515242    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2e626f83d5b"
	I0729 03:44:56.533860    8948 logs.go:123] Gathering logs for coredns [340ef99a6480] ...
	I0729 03:44:56.533873    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 340ef99a6480"
	I0729 03:44:56.547202    8948 logs.go:123] Gathering logs for kubelet ...
	I0729 03:44:56.547211    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 03:44:56.581600    8948 logs.go:123] Gathering logs for coredns [5ee046eb1929] ...
	I0729 03:44:56.581618    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ee046eb1929"
	I0729 03:44:56.593720    8948 logs.go:123] Gathering logs for coredns [3aa3da0e32a3] ...
	I0729 03:44:56.593728    8948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa3da0e32a3"
	I0729 03:44:59.108013    8948 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 03:45:04.110258    8948 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 03:45:04.115996    8948 out.go:177] 
	W0729 03:45:04.119032    8948 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 03:45:04.119038    8948 out.go:239] * 
	W0729 03:45:04.119471    8948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:04.134955    8948 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-590000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.69s)
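The stderr block above is one probe cycle repeated verbatim: an HTTPS GET against the apiserver's /healthz endpoint that times out after 5 seconds, followed by a fresh log-gathering pass over every control-plane container. A minimal shell sketch of that cycle, built only from the commands and values visible in the log (the retry cadence and the "ok" body check are assumptions; minikube itself implements this in Go, not shell):

    # Probe the health endpoint shown in the log (5s client timeout per try);
    # on timeout, gather container logs with the same docker ps filter/format
    # the harness uses above.
    while ! curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
        for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
          docker logs --tail 400 "${id}"
        done
      done
      sleep 3
    done

In the run above this loop never exits: every probe ends in "context deadline exceeded" until the 6m0s node-wait budget is spent, which is what produces the GUEST_START exit.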

TestPause/serial/Start (9.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-062000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-062000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.842521583s)

-- stdout --
	* [pause-062000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-062000" primary control-plane node in "pause-062000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-062000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-062000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-062000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-062000 -n pause-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-062000 -n pause-062000: exit status 7 (33.055ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)
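This failure, and every qemu2 start failure below it, shows the same root symptom: nothing is accepting connections on /var/run/socket_vmnet, so both VM-creation attempts die with "Connection refused" before provisioning begins. A pre-flight check one could run on the CI host, sketched under the assumption that socket_vmnet is installed as a Homebrew service (the log itself does not say how it was installed):

    # Is anything serving the vmnet socket that the qemu2 driver needs?
    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet </dev/null && echo listening || echo refused

    # If refused, restart the daemon (service name assumes a Homebrew install).
    sudo brew services restart socket_vmnet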

TestNoKubernetes/serial/StartWithK8s (9.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-460000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-460000 --driver=qemu2 : exit status 80 (9.840050041s)

-- stdout --
	* [NoKubernetes-460000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-460000" primary control-plane node in "NoKubernetes-460000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-460000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-460000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-460000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000: exit status 7 (32.277791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)
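The post-mortem in each failed block reads host state through minikube's Go-template status output, which is why a stopped VM prints the single word "Stopped" and exits with status 7. The same template syntax can pull several status fields in one call; a small usage sketch (the Kubelet, APIServer, and Kubeconfig field names match minikube's documented status output and are assumptions for this exact build):

    # Host state only, as the harness queries it above:
    out/minikube-darwin-arm64 status -p NoKubernetes-460000 --format='{{.Host}}'

    # Several fields through the same Go template mechanism:
    out/minikube-darwin-arm64 status -p NoKubernetes-460000 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'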

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245308667s)

-- stdout --
	* [NoKubernetes-460000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-460000
	* Restarting existing qemu2 VM for "NoKubernetes-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-460000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000: exit status 7 (65.64575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --driver=qemu2 : exit status 80 (5.234798875s)

-- stdout --
	* [NoKubernetes-460000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-460000
	* Restarting existing qemu2 VM for "NoKubernetes-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-460000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000: exit status 7 (46.194042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)

TestNoKubernetes/serial/StartNoArgs (5.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-460000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-460000 --driver=qemu2 : exit status 80 (5.241165333s)

-- stdout --
	* [NoKubernetes-460000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-460000
	* Restarting existing qemu2 VM for "NoKubernetes-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-460000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-460000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-460000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-460000 -n NoKubernetes-460000: exit status 7 (36.074625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-460000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.28s)

TestNetworkPlugins/group/auto/Start (9.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.991121459s)

-- stdout --
	* [auto-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-218000" primary control-plane node in "auto-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:43:08.466543    9127 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:43:08.466684    9127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:08.466687    9127 out.go:304] Setting ErrFile to fd 2...
	I0729 03:43:08.466690    9127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:08.466809    9127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:43:08.467948    9127 out.go:298] Setting JSON to false
	I0729 03:43:08.484231    9127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6157,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:43:08.484306    9127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:43:08.490942    9127 out.go:177] * [auto-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:43:08.497820    9127 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:43:08.497877    9127 notify.go:220] Checking for updates...
	I0729 03:43:08.503794    9127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:43:08.506767    9127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:43:08.509811    9127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:43:08.511172    9127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:43:08.513758    9127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:43:08.517037    9127 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:43:08.517100    9127 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:43:08.517139    9127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:43:08.518771    9127 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:43:08.525835    9127 start.go:297] selected driver: qemu2
	I0729 03:43:08.525843    9127 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:43:08.525854    9127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:43:08.528129    9127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:43:08.531613    9127 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:43:08.534826    9127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:43:08.534854    9127 cni.go:84] Creating CNI manager for ""
	I0729 03:43:08.534863    9127 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:43:08.534872    9127 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:43:08.534894    9127 start.go:340] cluster config:
	{Name:auto-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:43:08.538254    9127 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:43:08.546773    9127 out.go:177] * Starting "auto-218000" primary control-plane node in "auto-218000" cluster
	I0729 03:43:08.550803    9127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:43:08.550819    9127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:43:08.550826    9127 cache.go:56] Caching tarball of preloaded images
	I0729 03:43:08.550888    9127 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:43:08.550894    9127 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:43:08.550966    9127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/auto-218000/config.json ...
	I0729 03:43:08.550976    9127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/auto-218000/config.json: {Name:mk476eb31fb046991b674574d45c159fe97163bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:43:08.551337    9127 start.go:360] acquireMachinesLock for auto-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:08.551365    9127 start.go:364] duration metric: took 23.291µs to acquireMachinesLock for "auto-218000"
	I0729 03:43:08.551376    9127 start.go:93] Provisioning new machine with config: &{Name:auto-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:08.551400    9127 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:08.555860    9127 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:08.570682    9127 start.go:159] libmachine.API.Create for "auto-218000" (driver="qemu2")
	I0729 03:43:08.570712    9127 client.go:168] LocalClient.Create starting
	I0729 03:43:08.570777    9127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:08.570815    9127 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:08.570825    9127 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:08.570862    9127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:08.570887    9127 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:08.570895    9127 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:08.571361    9127 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:08.721419    9127 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:08.957418    9127 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:08.957428    9127 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:08.957651    9127 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2
	I0729 03:43:08.967038    9127 main.go:141] libmachine: STDOUT: 
	I0729 03:43:08.967060    9127 main.go:141] libmachine: STDERR: 
	I0729 03:43:08.967113    9127 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2 +20000M
	I0729 03:43:08.975198    9127 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:08.975214    9127 main.go:141] libmachine: STDERR: 
	I0729 03:43:08.975238    9127 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2
	I0729 03:43:08.975243    9127 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:08.975253    9127 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:08.975280    9127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:c5:3c:58:a9:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2
	I0729 03:43:08.976936    9127 main.go:141] libmachine: STDOUT: 
	I0729 03:43:08.976951    9127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:08.976970    9127 client.go:171] duration metric: took 406.260833ms to LocalClient.Create
	I0729 03:43:10.979031    9127 start.go:128] duration metric: took 2.427665417s to createHost
	I0729 03:43:10.979070    9127 start.go:83] releasing machines lock for "auto-218000", held for 2.427745292s
	W0729 03:43:10.979104    9127 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:10.989260    9127 out.go:177] * Deleting "auto-218000" in qemu2 ...
	W0729 03:43:11.011015    9127 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:11.011035    9127 start.go:729] Will try again in 5 seconds ...
	I0729 03:43:16.012611    9127 start.go:360] acquireMachinesLock for auto-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:16.012740    9127 start.go:364] duration metric: took 96.667µs to acquireMachinesLock for "auto-218000"
	I0729 03:43:16.012765    9127 start.go:93] Provisioning new machine with config: &{Name:auto-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:16.012832    9127 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:16.021040    9127 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:16.036297    9127 start.go:159] libmachine.API.Create for "auto-218000" (driver="qemu2")
	I0729 03:43:16.036326    9127 client.go:168] LocalClient.Create starting
	I0729 03:43:16.036385    9127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:16.036423    9127 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:16.036433    9127 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:16.036462    9127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:16.036484    9127 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:16.036497    9127 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:16.036768    9127 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:16.185632    9127 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:16.364658    9127 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:16.364669    9127 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:16.364887    9127 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2
	I0729 03:43:16.374242    9127 main.go:141] libmachine: STDOUT: 
	I0729 03:43:16.374258    9127 main.go:141] libmachine: STDERR: 
	I0729 03:43:16.374304    9127 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2 +20000M
	I0729 03:43:16.382220    9127 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:16.382249    9127 main.go:141] libmachine: STDERR: 
	I0729 03:43:16.382262    9127 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2
	I0729 03:43:16.382269    9127 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:16.382275    9127 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:16.382314    9127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4a:36:e3:f6:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/auto-218000/disk.qcow2
	I0729 03:43:16.384006    9127 main.go:141] libmachine: STDOUT: 
	I0729 03:43:16.384021    9127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:16.384035    9127 client.go:171] duration metric: took 347.710667ms to LocalClient.Create
	I0729 03:43:18.386210    9127 start.go:128] duration metric: took 2.373385875s to createHost
	I0729 03:43:18.386285    9127 start.go:83] releasing machines lock for "auto-218000", held for 2.37357975s
	W0729 03:43:18.386848    9127 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:18.396340    9127 out.go:177] 
	W0729 03:43:18.403597    9127 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:43:18.403626    9127 out.go:239] * 
	* 
	W0729 03:43:18.406284    9127 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:43:18.415532    9127 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.99s)
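
All of the attempts above fail before QEMU ever boots: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet ("Connection refused"), minikube retries once after 5 seconds, hits the same error, and exits with status 80 (GUEST_PROVISION). The same pattern repeats for every plugin in this group, so the failure is host-level (no socket_vmnet daemon listening), not plugin-specific. A minimal Go sketch of the same reachability check, using only the standard library and the socket path taken verbatim from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same UNIX socket that socket_vmnet_client connects to
		// before launching qemu-system-aarch64. On a healthy host this
		// succeeds; "connection refused" means no daemon is listening.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

Run on this host, the dial would return the same "connection refused", pointing at the socket_vmnet service rather than at the CNI configurations under test.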

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.191361791s)

                                                
                                                
-- stdout --
	* [kindnet-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-218000" primary control-plane node in "kindnet-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:43:20.622134    9239 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:43:20.622271    9239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:20.622274    9239 out.go:304] Setting ErrFile to fd 2...
	I0729 03:43:20.622277    9239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:20.622397    9239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:43:20.623575    9239 out.go:298] Setting JSON to false
	I0729 03:43:20.639830    9239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6169,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:43:20.639889    9239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:43:20.644777    9239 out.go:177] * [kindnet-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:43:20.652614    9239 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:43:20.652685    9239 notify.go:220] Checking for updates...
	I0729 03:43:20.660605    9239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:43:20.664635    9239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:43:20.667653    9239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:43:20.670580    9239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:43:20.673619    9239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:43:20.676927    9239 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:43:20.676993    9239 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:43:20.677044    9239 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:43:20.680634    9239 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:43:20.687559    9239 start.go:297] selected driver: qemu2
	I0729 03:43:20.687565    9239 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:43:20.687571    9239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:43:20.689843    9239 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:43:20.694584    9239 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:43:20.697681    9239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:43:20.697701    9239 cni.go:84] Creating CNI manager for "kindnet"
	I0729 03:43:20.697712    9239 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 03:43:20.697752    9239 start.go:340] cluster config:
	{Name:kindnet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:43:20.701534    9239 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:43:20.710578    9239 out.go:177] * Starting "kindnet-218000" primary control-plane node in "kindnet-218000" cluster
	I0729 03:43:20.714591    9239 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:43:20.714612    9239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:43:20.714628    9239 cache.go:56] Caching tarball of preloaded images
	I0729 03:43:20.714692    9239 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:43:20.714698    9239 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:43:20.714768    9239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kindnet-218000/config.json ...
	I0729 03:43:20.714784    9239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kindnet-218000/config.json: {Name:mkca4d90636ac14c6d9b7eed691675440923b240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:43:20.715005    9239 start.go:360] acquireMachinesLock for kindnet-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:20.715038    9239 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "kindnet-218000"
	I0729 03:43:20.715049    9239 start.go:93] Provisioning new machine with config: &{Name:kindnet-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:20.715101    9239 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:20.722600    9239 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:20.739514    9239 start.go:159] libmachine.API.Create for "kindnet-218000" (driver="qemu2")
	I0729 03:43:20.739540    9239 client.go:168] LocalClient.Create starting
	I0729 03:43:20.739608    9239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:20.739640    9239 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:20.739652    9239 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:20.739689    9239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:20.739712    9239 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:20.739721    9239 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:20.740085    9239 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:20.922415    9239 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:21.267951    9239 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:21.267962    9239 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:21.268212    9239 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2
	I0729 03:43:21.278074    9239 main.go:141] libmachine: STDOUT: 
	I0729 03:43:21.278091    9239 main.go:141] libmachine: STDERR: 
	I0729 03:43:21.278148    9239 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2 +20000M
	I0729 03:43:21.286067    9239 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:21.286081    9239 main.go:141] libmachine: STDERR: 
	I0729 03:43:21.286095    9239 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2
	I0729 03:43:21.286100    9239 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:21.286113    9239 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:21.286151    9239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:88:19:8c:82:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2
	I0729 03:43:21.287825    9239 main.go:141] libmachine: STDOUT: 
	I0729 03:43:21.287840    9239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:21.287857    9239 client.go:171] duration metric: took 548.322459ms to LocalClient.Create
	I0729 03:43:23.288994    9239 start.go:128] duration metric: took 2.573922875s to createHost
	I0729 03:43:23.289037    9239 start.go:83] releasing machines lock for "kindnet-218000", held for 2.574042167s
	W0729 03:43:23.289102    9239 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:23.294955    9239 out.go:177] * Deleting "kindnet-218000" in qemu2 ...
	W0729 03:43:23.316961    9239 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:23.316981    9239 start.go:729] Will try again in 5 seconds ...
	I0729 03:43:28.319058    9239 start.go:360] acquireMachinesLock for kindnet-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:28.319348    9239 start.go:364] duration metric: took 245µs to acquireMachinesLock for "kindnet-218000"
	I0729 03:43:28.319429    9239 start.go:93] Provisioning new machine with config: &{Name:kindnet-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:28.319574    9239 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:28.328995    9239 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:28.362832    9239 start.go:159] libmachine.API.Create for "kindnet-218000" (driver="qemu2")
	I0729 03:43:28.362876    9239 client.go:168] LocalClient.Create starting
	I0729 03:43:28.362983    9239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:28.363037    9239 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:28.363059    9239 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:28.363116    9239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:28.363150    9239 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:28.363159    9239 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:28.363599    9239 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:28.517067    9239 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:28.724843    9239 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:28.724856    9239 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:28.725097    9239 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2
	I0729 03:43:28.735074    9239 main.go:141] libmachine: STDOUT: 
	I0729 03:43:28.735097    9239 main.go:141] libmachine: STDERR: 
	I0729 03:43:28.735159    9239 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2 +20000M
	I0729 03:43:28.743236    9239 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:28.743258    9239 main.go:141] libmachine: STDERR: 
	I0729 03:43:28.743271    9239 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2
	I0729 03:43:28.743275    9239 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:28.743292    9239 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:28.743328    9239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:88:09:43:67:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kindnet-218000/disk.qcow2
	I0729 03:43:28.745048    9239 main.go:141] libmachine: STDOUT: 
	I0729 03:43:28.745061    9239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:28.745080    9239 client.go:171] duration metric: took 382.204917ms to LocalClient.Create
	I0729 03:43:30.746384    9239 start.go:128] duration metric: took 2.426810042s to createHost
	I0729 03:43:30.746505    9239 start.go:83] releasing machines lock for "kindnet-218000", held for 2.427185542s
	W0729 03:43:30.746898    9239 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:30.756287    9239 out.go:177] 
	W0729 03:43:30.763428    9239 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:43:30.763463    9239 out.go:239] * 
	* 
	W0729 03:43:30.765998    9239 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:43:30.775326    9239 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.19s)
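
kindnet fails identically, which matches the mechanism visible in the logged command line: libmachine does not start QEMU directly but wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and passes the connected descriptor on to QEMU (hence the -netdev socket,id=net0,fd=3 flag). When that connect fails, the wrapper exits with status 1 and no VM is created. A hedged sketch of the invocation from Go, with the long QEMU argument list replaced by a harmless stand-in (/usr/bin/true), since only the connect step matters here:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary and socket path are copied from the log; the wrapped
		// command is a stand-in for the real qemu-system-aarch64 line.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "/usr/bin/true")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
		// Expected on this host, per the report:
		//   Failed to connect to "/var/run/socket_vmnet": Connection refused
		//   (err: exit status 1)
	}

This is consistent with the duration metrics above: each LocalClient.Create returns in well under a second, so the run never reaches guest provisioning.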

                                                
                                    
TestNetworkPlugins/group/calico/Start (10.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.0282065s)

                                                
                                                
-- stdout --
	* [calico-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-218000" primary control-plane node in "calico-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:43:33.059478    9354 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:43:33.059622    9354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:33.059625    9354 out.go:304] Setting ErrFile to fd 2...
	I0729 03:43:33.059628    9354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:33.059756    9354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:43:33.060911    9354 out.go:298] Setting JSON to false
	I0729 03:43:33.078925    9354 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6182,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:43:33.079008    9354 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:43:33.084121    9354 out.go:177] * [calico-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:43:33.092074    9354 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:43:33.092099    9354 notify.go:220] Checking for updates...
	I0729 03:43:33.100015    9354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:43:33.103895    9354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:43:33.107014    9354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:43:33.111060    9354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:43:33.112591    9354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:43:33.115377    9354 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:43:33.115440    9354 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:43:33.115492    9354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:43:33.119024    9354 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:43:33.124031    9354 start.go:297] selected driver: qemu2
	I0729 03:43:33.124039    9354 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:43:33.124045    9354 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:43:33.126374    9354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:43:33.129016    9354 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:43:33.132116    9354 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:43:33.132144    9354 cni.go:84] Creating CNI manager for "calico"
	I0729 03:43:33.132148    9354 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 03:43:33.132182    9354 start.go:340] cluster config:
	{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:43:33.135916    9354 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:43:33.144098    9354 out.go:177] * Starting "calico-218000" primary control-plane node in "calico-218000" cluster
	I0729 03:43:33.147981    9354 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:43:33.147993    9354 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:43:33.148001    9354 cache.go:56] Caching tarball of preloaded images
	I0729 03:43:33.148048    9354 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:43:33.148054    9354 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:43:33.148102    9354 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/calico-218000/config.json ...
	I0729 03:43:33.148111    9354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/calico-218000/config.json: {Name:mkffdd1d6eb020907fd3019cc47dddf2bd449314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:43:33.148384    9354 start.go:360] acquireMachinesLock for calico-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:33.148415    9354 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "calico-218000"
	I0729 03:43:33.148427    9354 start.go:93] Provisioning new machine with config: &{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:33.148452    9354 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:33.152081    9354 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:33.166726    9354 start.go:159] libmachine.API.Create for "calico-218000" (driver="qemu2")
	I0729 03:43:33.166745    9354 client.go:168] LocalClient.Create starting
	I0729 03:43:33.166805    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:33.166835    9354 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:33.166845    9354 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:33.166882    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:33.166905    9354 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:33.166912    9354 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:33.167276    9354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:33.319509    9354 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:33.601292    9354 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:33.601305    9354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:33.601591    9354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2
	I0729 03:43:33.611644    9354 main.go:141] libmachine: STDOUT: 
	I0729 03:43:33.611668    9354 main.go:141] libmachine: STDERR: 
	I0729 03:43:33.611723    9354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2 +20000M
	I0729 03:43:33.619779    9354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:33.619794    9354 main.go:141] libmachine: STDERR: 
	I0729 03:43:33.619810    9354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2
	I0729 03:43:33.619813    9354 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:33.619826    9354 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:33.619860    9354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:0a:a7:70:39:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2
	I0729 03:43:33.621583    9354 main.go:141] libmachine: STDOUT: 
	I0729 03:43:33.621596    9354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:33.621611    9354 client.go:171] duration metric: took 454.870917ms to LocalClient.Create
	I0729 03:43:35.623761    9354 start.go:128] duration metric: took 2.475331875s to createHost
	I0729 03:43:35.623817    9354 start.go:83] releasing machines lock for "calico-218000", held for 2.475440125s
	W0729 03:43:35.623973    9354 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:35.630224    9354 out.go:177] * Deleting "calico-218000" in qemu2 ...
	W0729 03:43:35.664092    9354 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:35.664127    9354 start.go:729] Will try again in 5 seconds ...
	I0729 03:43:40.666271    9354 start.go:360] acquireMachinesLock for calico-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:40.666859    9354 start.go:364] duration metric: took 462.375µs to acquireMachinesLock for "calico-218000"
	I0729 03:43:40.666953    9354 start.go:93] Provisioning new machine with config: &{Name:calico-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:40.667321    9354 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:40.676627    9354 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:40.725888    9354 start.go:159] libmachine.API.Create for "calico-218000" (driver="qemu2")
	I0729 03:43:40.725942    9354 client.go:168] LocalClient.Create starting
	I0729 03:43:40.726070    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:40.726156    9354 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:40.726174    9354 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:40.726237    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:40.726283    9354 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:40.726296    9354 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:40.726850    9354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:40.903828    9354 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:40.998502    9354 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:40.998516    9354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:40.998751    9354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2
	I0729 03:43:41.008514    9354 main.go:141] libmachine: STDOUT: 
	I0729 03:43:41.008532    9354 main.go:141] libmachine: STDERR: 
	I0729 03:43:41.008582    9354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2 +20000M
	I0729 03:43:41.016844    9354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:41.016858    9354 main.go:141] libmachine: STDERR: 
	I0729 03:43:41.016871    9354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2
	I0729 03:43:41.016876    9354 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:41.016888    9354 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:41.016923    9354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:6c:d0:12:eb:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/calico-218000/disk.qcow2
	I0729 03:43:41.018798    9354 main.go:141] libmachine: STDOUT: 
	I0729 03:43:41.018814    9354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:41.018828    9354 client.go:171] duration metric: took 292.88675ms to LocalClient.Create
	I0729 03:43:43.021048    9354 start.go:128] duration metric: took 2.35371225s to createHost
	I0729 03:43:43.021124    9354 start.go:83] releasing machines lock for "calico-218000", held for 2.354269416s
	W0729 03:43:43.021493    9354 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:43.030218    9354 out.go:177] 
	W0729 03:43:43.034261    9354 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:43:43.034281    9354 out.go:239] * 
	* 
	W0729 03:43:43.036050    9354 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:43:43.045120    9354 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.03s)
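
All three TestNetworkPlugins start failures in this group (calico above, custom-flannel and false below) fail the same way: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the unix socket at /var/run/socket_vmnet, and the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` means nothing was listening on that path, i.e. the socket_vmnet daemon was not running on the build agent. Exit status 80 is minikube's generic guest-error exit code, consistent with the GUEST_PROVISION reason in the stderr above. As a host-side sanity check (commands suggested here for diagnosis, not part of the test run), pgrep -fl socket_vmnet would show whether the daemon is up, ls -l /var/run/socket_vmnet whether the socket path exists, and, if socket_vmnet is installed as a launchd service, sudo launchctl list | grep socket_vmnet whether it loaded.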

TestNetworkPlugins/group/custom-flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.78656075s)

-- stdout --
	* [custom-flannel-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-218000" primary control-plane node in "custom-flannel-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:43:45.460450    9472 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:43:45.460577    9472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:45.460582    9472 out.go:304] Setting ErrFile to fd 2...
	I0729 03:43:45.460584    9472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:45.460719    9472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:43:45.462089    9472 out.go:298] Setting JSON to false
	I0729 03:43:45.480642    9472 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6194,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:43:45.480826    9472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:43:45.484810    9472 out.go:177] * [custom-flannel-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:43:45.491947    9472 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:43:45.491942    9472 notify.go:220] Checking for updates...
	I0729 03:43:45.498848    9472 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:43:45.501883    9472 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:43:45.505877    9472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:43:45.508860    9472 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:43:45.511933    9472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:43:45.515169    9472 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:43:45.515234    9472 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:43:45.515286    9472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:43:45.518773    9472 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:43:45.524779    9472 start.go:297] selected driver: qemu2
	I0729 03:43:45.524790    9472 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:43:45.524796    9472 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:43:45.527326    9472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:43:45.530825    9472 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:43:45.534930    9472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:43:45.534953    9472 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 03:43:45.534966    9472 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 03:43:45.535000    9472 start.go:340] cluster config:
	{Name:custom-flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:43:45.538999    9472 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:43:45.546811    9472 out.go:177] * Starting "custom-flannel-218000" primary control-plane node in "custom-flannel-218000" cluster
	I0729 03:43:45.550718    9472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:43:45.550754    9472 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:43:45.550766    9472 cache.go:56] Caching tarball of preloaded images
	I0729 03:43:45.550869    9472 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:43:45.550875    9472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:43:45.550934    9472 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/custom-flannel-218000/config.json ...
	I0729 03:43:45.550948    9472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/custom-flannel-218000/config.json: {Name:mkbe482f05cbc84ab2753f01769815e7c93339a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:43:45.551271    9472 start.go:360] acquireMachinesLock for custom-flannel-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:45.551305    9472 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "custom-flannel-218000"
	I0729 03:43:45.551317    9472 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:45.551348    9472 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:45.559728    9472 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:45.576281    9472 start.go:159] libmachine.API.Create for "custom-flannel-218000" (driver="qemu2")
	I0729 03:43:45.576311    9472 client.go:168] LocalClient.Create starting
	I0729 03:43:45.576394    9472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:45.576430    9472 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:45.576440    9472 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:45.576486    9472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:45.576509    9472 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:45.576515    9472 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:45.576899    9472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:45.724355    9472 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:45.779673    9472 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:45.779684    9472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:45.779940    9472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2
	I0729 03:43:45.789638    9472 main.go:141] libmachine: STDOUT: 
	I0729 03:43:45.789669    9472 main.go:141] libmachine: STDERR: 
	I0729 03:43:45.789720    9472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2 +20000M
	I0729 03:43:45.797896    9472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:45.797912    9472 main.go:141] libmachine: STDERR: 
	I0729 03:43:45.797926    9472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2
	I0729 03:43:45.797930    9472 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:45.797950    9472 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:45.797975    9472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:81:46:0d:a7:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2
	I0729 03:43:45.799701    9472 main.go:141] libmachine: STDOUT: 
	I0729 03:43:45.799719    9472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:45.799737    9472 client.go:171] duration metric: took 223.425792ms to LocalClient.Create
	I0729 03:43:47.801907    9472 start.go:128] duration metric: took 2.250571333s to createHost
	I0729 03:43:47.801989    9472 start.go:83] releasing machines lock for "custom-flannel-218000", held for 2.250716333s
	W0729 03:43:47.802074    9472 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:47.809582    9472 out.go:177] * Deleting "custom-flannel-218000" in qemu2 ...
	W0729 03:43:47.839913    9472 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:47.839942    9472 start.go:729] Will try again in 5 seconds ...
	I0729 03:43:52.841476    9472 start.go:360] acquireMachinesLock for custom-flannel-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:52.842001    9472 start.go:364] duration metric: took 435.416µs to acquireMachinesLock for "custom-flannel-218000"
	I0729 03:43:52.842203    9472 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:52.842525    9472 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:52.855942    9472 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:52.899339    9472 start.go:159] libmachine.API.Create for "custom-flannel-218000" (driver="qemu2")
	I0729 03:43:52.899389    9472 client.go:168] LocalClient.Create starting
	I0729 03:43:52.899519    9472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:52.899598    9472 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:52.899617    9472 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:52.899689    9472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:52.899735    9472 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:52.899747    9472 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:52.900278    9472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:53.056761    9472 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:53.160443    9472 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:53.160455    9472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:53.160667    9472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2
	I0729 03:43:53.170283    9472 main.go:141] libmachine: STDOUT: 
	I0729 03:43:53.170304    9472 main.go:141] libmachine: STDERR: 
	I0729 03:43:53.170373    9472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2 +20000M
	I0729 03:43:53.178508    9472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:53.178525    9472 main.go:141] libmachine: STDERR: 
	I0729 03:43:53.178539    9472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2
	I0729 03:43:53.178544    9472 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:53.178555    9472 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:53.178606    9472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:b6:84:15:cb:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/custom-flannel-218000/disk.qcow2
	I0729 03:43:53.180317    9472 main.go:141] libmachine: STDOUT: 
	I0729 03:43:53.180334    9472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:53.180348    9472 client.go:171] duration metric: took 280.959958ms to LocalClient.Create
	I0729 03:43:55.182436    9472 start.go:128] duration metric: took 2.33994s to createHost
	I0729 03:43:55.182471    9472 start.go:83] releasing machines lock for "custom-flannel-218000", held for 2.340439042s
	W0729 03:43:55.182738    9472 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:55.194037    9472 out.go:177] 
	W0729 03:43:55.197864    9472 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:43:55.197881    9472 out.go:239] * 
	* 
	W0729 03:43:55.198881    9472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:43:55.206070    9472 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
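
For readers unfamiliar with the -netdev socket,id=net0,fd=3 flag in the QEMU invocations above: socket_vmnet_client is expected to open the connection to /var/run/socket_vmnet itself and pass the already-connected descriptor to QEMU as file descriptor 3. The Go sketch below shows that hand-off pattern; it is an assumption-laden illustration (path, argument handling, and the stream-socket dial are simplifications), not actual minikube or socket_vmnet code.

	// Illustrative sketch of fd-passing to a child process; assumed
	// behavior, not socket_vmnet_client's real source.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		if len(os.Args) < 2 {
			log.Fatal("usage: wrapper <command> [args...]")
		}
		// This dial is the step that fails throughout this report: with no
		// daemon listening on the path, connect(2) returns ECONNREFUSED.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		defer conn.Close()

		// Duplicate the connected descriptor so the child can inherit it.
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}

		// ExtraFiles[0] becomes fd 3 in the child process, which QEMU's
		// "-netdev socket,fd=3" then uses as its network backend.
		cmd := exec.Command(os.Args[1], os.Args[2:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

With no daemon listening, the dial fails before QEMU is ever spawned, which is why every failed run above shows an empty STDOUT and only the single-line STDERR from the client.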

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.789834125s)

-- stdout --
	* [false-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-218000" primary control-plane node in "false-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:43:57.566720    9589 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:43:57.566861    9589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:57.566864    9589 out.go:304] Setting ErrFile to fd 2...
	I0729 03:43:57.566866    9589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:43:57.566999    9589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:43:57.568057    9589 out.go:298] Setting JSON to false
	I0729 03:43:57.584387    9589 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6206,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:43:57.584492    9589 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:43:57.591314    9589 out.go:177] * [false-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:43:57.599298    9589 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:43:57.599364    9589 notify.go:220] Checking for updates...
	I0729 03:43:57.606250    9589 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:43:57.609270    9589 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:43:57.612254    9589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:43:57.615297    9589 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:43:57.618261    9589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:43:57.620109    9589 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:43:57.620180    9589 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:43:57.620222    9589 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:43:57.624244    9589 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:43:57.631092    9589 start.go:297] selected driver: qemu2
	I0729 03:43:57.631099    9589 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:43:57.631105    9589 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:43:57.633330    9589 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:43:57.637181    9589 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:43:57.640322    9589 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:43:57.640337    9589 cni.go:84] Creating CNI manager for "false"
	I0729 03:43:57.640366    9589 start.go:340] cluster config:
	{Name:false-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:43:57.644112    9589 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:43:57.652244    9589 out.go:177] * Starting "false-218000" primary control-plane node in "false-218000" cluster
	I0729 03:43:57.656301    9589 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:43:57.656315    9589 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:43:57.656323    9589 cache.go:56] Caching tarball of preloaded images
	I0729 03:43:57.656388    9589 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:43:57.656393    9589 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:43:57.656455    9589 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/false-218000/config.json ...
	I0729 03:43:57.656465    9589 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/false-218000/config.json: {Name:mk38e5a9cada32833685902d04ec74f6f418b56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:43:57.656684    9589 start.go:360] acquireMachinesLock for false-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:43:57.656716    9589 start.go:364] duration metric: took 26.959µs to acquireMachinesLock for "false-218000"
	I0729 03:43:57.656727    9589 start.go:93] Provisioning new machine with config: &{Name:false-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:43:57.656754    9589 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:43:57.665269    9589 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:43:57.681842    9589 start.go:159] libmachine.API.Create for "false-218000" (driver="qemu2")
	I0729 03:43:57.681869    9589 client.go:168] LocalClient.Create starting
	I0729 03:43:57.681944    9589 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:43:57.681976    9589 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:57.681989    9589 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:57.682029    9589 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:43:57.682057    9589 main.go:141] libmachine: Decoding PEM data...
	I0729 03:43:57.682066    9589 main.go:141] libmachine: Parsing certificate...
	I0729 03:43:57.682498    9589 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:43:57.831488    9589 main.go:141] libmachine: Creating SSH key...
	I0729 03:43:57.912411    9589 main.go:141] libmachine: Creating Disk image...
	I0729 03:43:57.912418    9589 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:43:57.912622    9589 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2
	I0729 03:43:57.921988    9589 main.go:141] libmachine: STDOUT: 
	I0729 03:43:57.922014    9589 main.go:141] libmachine: STDERR: 
	I0729 03:43:57.922067    9589 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2 +20000M
	I0729 03:43:57.930201    9589 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:43:57.930213    9589 main.go:141] libmachine: STDERR: 
	I0729 03:43:57.930237    9589 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2
	I0729 03:43:57.930241    9589 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:43:57.930253    9589 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:43:57.930285    9589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:16:a6:25:ec:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2
	I0729 03:43:57.931867    9589 main.go:141] libmachine: STDOUT: 
	I0729 03:43:57.931884    9589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:43:57.931903    9589 client.go:171] duration metric: took 250.035ms to LocalClient.Create
	I0729 03:43:59.934005    9589 start.go:128] duration metric: took 2.277279958s to createHost
	I0729 03:43:59.934065    9589 start.go:83] releasing machines lock for "false-218000", held for 2.277386167s
	W0729 03:43:59.934113    9589 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:59.947557    9589 out.go:177] * Deleting "false-218000" in qemu2 ...
	W0729 03:43:59.974330    9589 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:43:59.974350    9589 start.go:729] Will try again in 5 seconds ...
	I0729 03:44:04.976434    9589 start.go:360] acquireMachinesLock for false-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:04.976777    9589 start.go:364] duration metric: took 277.583µs to acquireMachinesLock for "false-218000"
	I0729 03:44:04.976878    9589 start.go:93] Provisioning new machine with config: &{Name:false-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:04.976990    9589 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:04.986030    9589 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:05.025526    9589 start.go:159] libmachine.API.Create for "false-218000" (driver="qemu2")
	I0729 03:44:05.025574    9589 client.go:168] LocalClient.Create starting
	I0729 03:44:05.025681    9589 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:05.025740    9589 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:05.025757    9589 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:05.025810    9589 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:05.025849    9589 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:05.025864    9589 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:05.026496    9589 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:05.198353    9589 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:05.270751    9589 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:05.270761    9589 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:05.270959    9589 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2
	I0729 03:44:05.280333    9589 main.go:141] libmachine: STDOUT: 
	I0729 03:44:05.280351    9589 main.go:141] libmachine: STDERR: 
	I0729 03:44:05.280404    9589 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2 +20000M
	I0729 03:44:05.288258    9589 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:05.288273    9589 main.go:141] libmachine: STDERR: 
	I0729 03:44:05.288287    9589 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2
	I0729 03:44:05.288294    9589 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:05.288312    9589 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:05.288360    9589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4b:3a:df:53:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/false-218000/disk.qcow2
	I0729 03:44:05.290022    9589 main.go:141] libmachine: STDOUT: 
	I0729 03:44:05.290036    9589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:05.290048    9589 client.go:171] duration metric: took 264.473375ms to LocalClient.Create
	I0729 03:44:07.292195    9589 start.go:128] duration metric: took 2.315138667s to createHost
	I0729 03:44:07.292233    9589 start.go:83] releasing machines lock for "false-218000", held for 2.315487375s
	W0729 03:44:07.292413    9589 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:07.304056    9589 out.go:177] 
	W0729 03:44:07.307050    9589 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:44:07.307064    9589 out.go:239] * 
	* 
	W0729 03:44:07.308281    9589 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:44:07.320021    9589 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
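Note: every failure in this group shares the root cause visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and "minikube start" exits with status 80. A minimal diagnostic sketch for the affected host follows; the binary and socket paths are taken from the log, while the launchd label and gateway address are illustrative assumptions, not confirmed by this report:

	# Does the socket exist, and is anything holding it?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet is installed as a LaunchDaemon, check that it is loaded (label is an assumption):
	sudo launchctl list | grep -i socket_vmnet
	# As a last resort, run the daemon in the foreground; 192.168.105.1 is only an example gateway:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet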

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.962732833s)

                                                
                                                
-- stdout --
	* [enable-default-cni-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-218000" primary control-plane node in "enable-default-cni-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:44:09.452405    9699 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:44:09.452527    9699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:09.452530    9699 out.go:304] Setting ErrFile to fd 2...
	I0729 03:44:09.452532    9699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:09.452658    9699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:44:09.453745    9699 out.go:298] Setting JSON to false
	I0729 03:44:09.470096    9699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6218,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:44:09.470182    9699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:44:09.476249    9699 out.go:177] * [enable-default-cni-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:44:09.482120    9699 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:44:09.482171    9699 notify.go:220] Checking for updates...
	I0729 03:44:09.489083    9699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:44:09.492138    9699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:44:09.495122    9699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:44:09.498110    9699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:44:09.501096    9699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:44:09.504437    9699 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:09.504515    9699 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:44:09.504559    9699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:44:09.507034    9699 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:44:09.514162    9699 start.go:297] selected driver: qemu2
	I0729 03:44:09.514169    9699 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:44:09.514175    9699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:44:09.516430    9699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:44:09.520035    9699 out.go:177] * Automatically selected the socket_vmnet network
	E0729 03:44:09.523203    9699 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 03:44:09.523214    9699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:44:09.523229    9699 cni.go:84] Creating CNI manager for "bridge"
	I0729 03:44:09.523232    9699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:44:09.523253    9699 start.go:340] cluster config:
	{Name:enable-default-cni-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:09.526664    9699 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:44:09.535121    9699 out.go:177] * Starting "enable-default-cni-218000" primary control-plane node in "enable-default-cni-218000" cluster
	I0729 03:44:09.539082    9699 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:44:09.539094    9699 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:44:09.539099    9699 cache.go:56] Caching tarball of preloaded images
	I0729 03:44:09.539142    9699 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:44:09.539147    9699 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:44:09.539201    9699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/enable-default-cni-218000/config.json ...
	I0729 03:44:09.539211    9699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/enable-default-cni-218000/config.json: {Name:mk89e9e61ed08b993a4bbed3db5cceb7c39e2b83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:44:09.539471    9699 start.go:360] acquireMachinesLock for enable-default-cni-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:09.539505    9699 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "enable-default-cni-218000"
	I0729 03:44:09.539515    9699 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:09.539542    9699 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:09.547177    9699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:09.561833    9699 start.go:159] libmachine.API.Create for "enable-default-cni-218000" (driver="qemu2")
	I0729 03:44:09.561865    9699 client.go:168] LocalClient.Create starting
	I0729 03:44:09.561931    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:09.561966    9699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:09.561974    9699 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:09.562019    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:09.562041    9699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:09.562045    9699 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:09.562508    9699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:09.712447    9699 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:09.769934    9699 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:09.769939    9699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:09.770144    9699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2
	I0729 03:44:09.779474    9699 main.go:141] libmachine: STDOUT: 
	I0729 03:44:09.779496    9699 main.go:141] libmachine: STDERR: 
	I0729 03:44:09.779552    9699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2 +20000M
	I0729 03:44:09.787815    9699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:09.787831    9699 main.go:141] libmachine: STDERR: 
	I0729 03:44:09.787854    9699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2
	I0729 03:44:09.787858    9699 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:09.787873    9699 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:09.787903    9699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:f3:32:0d:ce:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2
	I0729 03:44:09.789587    9699 main.go:141] libmachine: STDOUT: 
	I0729 03:44:09.789604    9699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:09.789625    9699 client.go:171] duration metric: took 227.760084ms to LocalClient.Create
	I0729 03:44:11.791741    9699 start.go:128] duration metric: took 2.252218666s to createHost
	I0729 03:44:11.791796    9699 start.go:83] releasing machines lock for "enable-default-cni-218000", held for 2.252316041s
	W0729 03:44:11.791872    9699 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:11.797905    9699 out.go:177] * Deleting "enable-default-cni-218000" in qemu2 ...
	W0729 03:44:11.838791    9699 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:11.838844    9699 start.go:729] Will try again in 5 seconds ...
	I0729 03:44:16.840901    9699 start.go:360] acquireMachinesLock for enable-default-cni-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:16.841056    9699 start.go:364] duration metric: took 124.833µs to acquireMachinesLock for "enable-default-cni-218000"
	I0729 03:44:16.841076    9699 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:16.841146    9699 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:16.849400    9699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:16.868059    9699 start.go:159] libmachine.API.Create for "enable-default-cni-218000" (driver="qemu2")
	I0729 03:44:16.868108    9699 client.go:168] LocalClient.Create starting
	I0729 03:44:16.868180    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:16.868218    9699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:16.868227    9699 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:16.868268    9699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:16.868292    9699 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:16.868298    9699 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:16.868600    9699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:17.019430    9699 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:17.329464    9699 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:17.329478    9699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:17.330020    9699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2
	I0729 03:44:17.339688    9699 main.go:141] libmachine: STDOUT: 
	I0729 03:44:17.339710    9699 main.go:141] libmachine: STDERR: 
	I0729 03:44:17.339763    9699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2 +20000M
	I0729 03:44:17.347949    9699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:17.347963    9699 main.go:141] libmachine: STDERR: 
	I0729 03:44:17.347975    9699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2
	I0729 03:44:17.347981    9699 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:17.347992    9699 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:17.348034    9699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ca:d6:c1:db:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/enable-default-cni-218000/disk.qcow2
	I0729 03:44:17.349736    9699 main.go:141] libmachine: STDOUT: 
	I0729 03:44:17.349752    9699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:17.349763    9699 client.go:171] duration metric: took 481.660042ms to LocalClient.Create
	I0729 03:44:19.352043    9699 start.go:128] duration metric: took 2.510912583s to createHost
	I0729 03:44:19.352120    9699 start.go:83] releasing machines lock for "enable-default-cni-218000", held for 2.51109875s
	W0729 03:44:19.352518    9699 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:19.363251    9699 out.go:177] 
	W0729 03:44:19.370132    9699 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:44:19.370182    9699 out.go:239] * 
	* 
	W0729 03:44:19.371780    9699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:44:19.377204    9699 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.96s)
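Note: beyond the socket_vmnet failure, the E-level line at 03:44:09.523203 above shows that --enable-default-cni is deprecated and is rewritten internally to --cni=bridge. Under that rewrite, an equivalent invocation of the same test command would be (sketch; all other flags unchanged):

	out/minikube-darwin-arm64 start -p enable-default-cni-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2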

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.80960875s)

                                                
                                                
-- stdout --
	* [flannel-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-218000" primary control-plane node in "flannel-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:44:21.485223    9810 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:44:21.485338    9810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:21.485343    9810 out.go:304] Setting ErrFile to fd 2...
	I0729 03:44:21.485346    9810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:21.485464    9810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:44:21.486625    9810 out.go:298] Setting JSON to false
	I0729 03:44:21.503124    9810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6230,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:44:21.503206    9810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:44:21.510125    9810 out.go:177] * [flannel-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:44:21.518342    9810 notify.go:220] Checking for updates...
	I0729 03:44:21.522272    9810 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:44:21.525324    9810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:44:21.529401    9810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:44:21.533227    9810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:44:21.536322    9810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:44:21.539369    9810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:44:21.542582    9810 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:21.542649    9810 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:44:21.542689    9810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:44:21.547349    9810 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:44:21.554330    9810 start.go:297] selected driver: qemu2
	I0729 03:44:21.554338    9810 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:44:21.554344    9810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:44:21.556436    9810 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:44:21.560339    9810 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:44:21.563450    9810 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:44:21.563466    9810 cni.go:84] Creating CNI manager for "flannel"
	I0729 03:44:21.563472    9810 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 03:44:21.563505    9810 start.go:340] cluster config:
	{Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:21.566895    9810 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:44:21.571374    9810 out.go:177] * Starting "flannel-218000" primary control-plane node in "flannel-218000" cluster
	I0729 03:44:21.579265    9810 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:44:21.579292    9810 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:44:21.579303    9810 cache.go:56] Caching tarball of preloaded images
	I0729 03:44:21.579370    9810 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:44:21.579375    9810 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:44:21.579455    9810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/flannel-218000/config.json ...
	I0729 03:44:21.579466    9810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/flannel-218000/config.json: {Name:mk9b71e833923b4eff2bae69c8ec4a2e411771c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:44:21.579676    9810 start.go:360] acquireMachinesLock for flannel-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:21.579708    9810 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "flannel-218000"
	I0729 03:44:21.579723    9810 start.go:93] Provisioning new machine with config: &{Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:21.579755    9810 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:21.586369    9810 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:21.602516    9810 start.go:159] libmachine.API.Create for "flannel-218000" (driver="qemu2")
	I0729 03:44:21.602546    9810 client.go:168] LocalClient.Create starting
	I0729 03:44:21.602612    9810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:21.602641    9810 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:21.602651    9810 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:21.602692    9810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:21.602715    9810 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:21.602726    9810 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:21.603067    9810 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:21.756422    9810 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:21.892235    9810 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:21.892243    9810 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:21.892457    9810 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2
	I0729 03:44:21.901868    9810 main.go:141] libmachine: STDOUT: 
	I0729 03:44:21.901932    9810 main.go:141] libmachine: STDERR: 
	I0729 03:44:21.901995    9810 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2 +20000M
	I0729 03:44:21.910047    9810 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:21.910061    9810 main.go:141] libmachine: STDERR: 
	I0729 03:44:21.910089    9810 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2
	I0729 03:44:21.910094    9810 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:21.910109    9810 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:21.910132    9810 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:20:40:15:89:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2
	I0729 03:44:21.911792    9810 main.go:141] libmachine: STDOUT: 
	I0729 03:44:21.911809    9810 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:21.911829    9810 client.go:171] duration metric: took 309.283792ms to LocalClient.Create
	I0729 03:44:23.914011    9810 start.go:128] duration metric: took 2.334274708s to createHost
	I0729 03:44:23.914084    9810 start.go:83] releasing machines lock for "flannel-218000", held for 2.334413708s
	W0729 03:44:23.914139    9810 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:23.920164    9810 out.go:177] * Deleting "flannel-218000" in qemu2 ...
	W0729 03:44:23.944402    9810 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:23.944442    9810 start.go:729] Will try again in 5 seconds ...
	I0729 03:44:28.946539    9810 start.go:360] acquireMachinesLock for flannel-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:28.947020    9810 start.go:364] duration metric: took 376.125µs to acquireMachinesLock for "flannel-218000"
	I0729 03:44:28.947104    9810 start.go:93] Provisioning new machine with config: &{Name:flannel-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:28.947369    9810 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:28.955003    9810 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:28.999907    9810 start.go:159] libmachine.API.Create for "flannel-218000" (driver="qemu2")
	I0729 03:44:28.999963    9810 client.go:168] LocalClient.Create starting
	I0729 03:44:29.000079    9810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:29.000157    9810 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:29.000178    9810 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:29.000243    9810 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:29.000299    9810 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:29.000310    9810 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:29.000903    9810 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:29.158741    9810 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:29.210360    9810 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:29.210366    9810 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:29.210577    9810 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2
	I0729 03:44:29.219802    9810 main.go:141] libmachine: STDOUT: 
	I0729 03:44:29.219829    9810 main.go:141] libmachine: STDERR: 
	I0729 03:44:29.219880    9810 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2 +20000M
	I0729 03:44:29.227905    9810 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:29.227927    9810 main.go:141] libmachine: STDERR: 
	I0729 03:44:29.227940    9810 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2
	I0729 03:44:29.227944    9810 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:29.227954    9810 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:29.227988    9810 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:da:51:16:b0:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/flannel-218000/disk.qcow2
	I0729 03:44:29.229606    9810 main.go:141] libmachine: STDOUT: 
	I0729 03:44:29.229621    9810 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:29.229633    9810 client.go:171] duration metric: took 229.664917ms to LocalClient.Create
	I0729 03:44:31.231687    9810 start.go:128] duration metric: took 2.284345666s to createHost
	I0729 03:44:31.231743    9810 start.go:83] releasing machines lock for "flannel-218000", held for 2.284735959s
	W0729 03:44:31.231920    9810 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:31.239027    9810 out.go:177] 
	W0729 03:44:31.246198    9810 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:44:31.246206    9810 out.go:239] * 
	* 
	W0729 03:44:31.246849    9810 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:44:31.263160    9810 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
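
Every attempt in this run stops at the same step: /opt/socket_vmnet/bin/socket_vmnet_client reports `Failed to connect to "/var/run/socket_vmnet": Connection refused` before qemu-system-aarch64 is ever launched, so the CNI under test is never exercised. The sketch below (hypothetical, not part of the suite; the file name and messages are illustrative) reproduces the client's first step, dialing the daemon socket named in the cluster config; "connection refused" from this probe means the socket_vmnet daemon is not listening on the build agent.

// probe_socket_vmnet.go — minimal sketch for checking the daemon socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config in the logs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the failures above: the path is
		// configured, but nothing is accepting connections behind it.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Depending on how socket_vmnet was installed, the socket may be root-owned, so the probe may need elevated privileges to connect.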

TestNetworkPlugins/group/bridge/Start (9.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.793840625s)

-- stdout --
	* [bridge-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-218000" primary control-plane node in "bridge-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:44:33.567362    9930 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:44:33.567495    9930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:33.567498    9930 out.go:304] Setting ErrFile to fd 2...
	I0729 03:44:33.567500    9930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:33.567626    9930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:44:33.568713    9930 out.go:298] Setting JSON to false
	I0729 03:44:33.584906    9930 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6242,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:44:33.584979    9930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:44:33.591360    9930 out.go:177] * [bridge-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:44:33.599226    9930 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:44:33.599275    9930 notify.go:220] Checking for updates...
	I0729 03:44:33.607113    9930 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:44:33.610133    9930 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:44:33.617193    9930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:44:33.621068    9930 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:44:33.624155    9930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:44:33.627539    9930 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:33.627599    9930 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:44:33.627644    9930 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:44:33.631064    9930 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:44:33.638073    9930 start.go:297] selected driver: qemu2
	I0729 03:44:33.638080    9930 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:44:33.638088    9930 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:44:33.640264    9930 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:44:33.643072    9930 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:44:33.646193    9930 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:44:33.646211    9930 cni.go:84] Creating CNI manager for "bridge"
	I0729 03:44:33.646216    9930 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:44:33.646256    9930 start.go:340] cluster config:
	{Name:bridge-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:33.649643    9930 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:44:33.657080    9930 out.go:177] * Starting "bridge-218000" primary control-plane node in "bridge-218000" cluster
	I0729 03:44:33.661150    9930 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:44:33.661164    9930 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:44:33.661175    9930 cache.go:56] Caching tarball of preloaded images
	I0729 03:44:33.661234    9930 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:44:33.661239    9930 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:44:33.661326    9930 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/bridge-218000/config.json ...
	I0729 03:44:33.661342    9930 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/bridge-218000/config.json: {Name:mkaf11aa00e5ed1b3df17445eef35f1786203d16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:44:33.661543    9930 start.go:360] acquireMachinesLock for bridge-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:33.661572    9930 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "bridge-218000"
	I0729 03:44:33.661582    9930 start.go:93] Provisioning new machine with config: &{Name:bridge-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:33.661608    9930 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:33.670085    9930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:33.685074    9930 start.go:159] libmachine.API.Create for "bridge-218000" (driver="qemu2")
	I0729 03:44:33.685104    9930 client.go:168] LocalClient.Create starting
	I0729 03:44:33.685168    9930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:33.685206    9930 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:33.685216    9930 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:33.685251    9930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:33.685277    9930 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:33.685287    9930 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:33.685632    9930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:33.836358    9930 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:33.991308    9930 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:33.991316    9930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:33.991558    9930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2
	I0729 03:44:34.001402    9930 main.go:141] libmachine: STDOUT: 
	I0729 03:44:34.001423    9930 main.go:141] libmachine: STDERR: 
	I0729 03:44:34.001475    9930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2 +20000M
	I0729 03:44:34.009832    9930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:34.009846    9930 main.go:141] libmachine: STDERR: 
	I0729 03:44:34.009870    9930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2
	I0729 03:44:34.009880    9930 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:34.009890    9930 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:34.009913    9930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:3b:9f:e2:ea:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2
	I0729 03:44:34.011672    9930 main.go:141] libmachine: STDOUT: 
	I0729 03:44:34.011685    9930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:34.011702    9930 client.go:171] duration metric: took 326.59925ms to LocalClient.Create
	I0729 03:44:36.013875    9930 start.go:128] duration metric: took 2.352279292s to createHost
	I0729 03:44:36.013961    9930 start.go:83] releasing machines lock for "bridge-218000", held for 2.35242575s
	W0729 03:44:36.014049    9930 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:36.027212    9930 out.go:177] * Deleting "bridge-218000" in qemu2 ...
	W0729 03:44:36.049255    9930 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:36.049288    9930 start.go:729] Will try again in 5 seconds ...
	I0729 03:44:41.051327    9930 start.go:360] acquireMachinesLock for bridge-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:41.051488    9930 start.go:364] duration metric: took 129.75µs to acquireMachinesLock for "bridge-218000"
	I0729 03:44:41.051506    9930 start.go:93] Provisioning new machine with config: &{Name:bridge-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:41.051581    9930 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:41.060299    9930 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:41.078281    9930 start.go:159] libmachine.API.Create for "bridge-218000" (driver="qemu2")
	I0729 03:44:41.078315    9930 client.go:168] LocalClient.Create starting
	I0729 03:44:41.078376    9930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:41.078414    9930 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:41.078424    9930 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:41.078455    9930 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:41.078477    9930 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:41.078482    9930 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:41.078805    9930 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:41.227079    9930 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:41.276528    9930 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:41.276534    9930 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:41.276739    9930 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2
	I0729 03:44:41.286215    9930 main.go:141] libmachine: STDOUT: 
	I0729 03:44:41.286236    9930 main.go:141] libmachine: STDERR: 
	I0729 03:44:41.286286    9930 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2 +20000M
	I0729 03:44:41.294505    9930 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:41.294519    9930 main.go:141] libmachine: STDERR: 
	I0729 03:44:41.294528    9930 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2
	I0729 03:44:41.294535    9930 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:41.294544    9930 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:41.294568    9930 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:cc:c4:5a:a3:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/bridge-218000/disk.qcow2
	I0729 03:44:41.296279    9930 main.go:141] libmachine: STDOUT: 
	I0729 03:44:41.296295    9930 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:41.296308    9930 client.go:171] duration metric: took 217.994417ms to LocalClient.Create
	I0729 03:44:43.298447    9930 start.go:128] duration metric: took 2.246878s to createHost
	I0729 03:44:43.298509    9930 start.go:83] releasing machines lock for "bridge-218000", held for 2.247055792s
	W0729 03:44:43.298812    9930 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:43.306550    9930 out.go:177] 
	W0729 03:44:43.311618    9930 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:44:43.311650    9930 out.go:239] * 
	* 
	W0729 03:44:43.313069    9930 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:44:43.321578    9930 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.80s)
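
The stderr capture shows the driver's recovery path rather than a second independent failure: the first create fails, the partially created profile is deleted, and exactly one retry runs after a fixed five-second pause before minikube exits with status 80. Reduced to a sketch (function names and messages below are illustrative, not minikube's internals):

// retry_sketch.go — the one-retry flow visible in the logs above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the qemu2 driver's host creation, which in this
// run always fails before qemu starts.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status net_test.go records as "failed start"
		}
	}
	fmt.Println("host created")
}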

TestNetworkPlugins/group/kubenet/Start (9.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-218000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.909888667s)

-- stdout --
	* [kubenet-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-218000" primary control-plane node in "kubenet-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:44:45.514008   10039 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:44:45.514141   10039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:45.514144   10039 out.go:304] Setting ErrFile to fd 2...
	I0729 03:44:45.514147   10039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:45.514288   10039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:44:45.515449   10039 out.go:298] Setting JSON to false
	I0729 03:44:45.532521   10039 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6254,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:44:45.532594   10039 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:44:45.536525   10039 out.go:177] * [kubenet-218000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:44:45.543374   10039 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:44:45.543483   10039 notify.go:220] Checking for updates...
	I0729 03:44:45.551239   10039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:44:45.554460   10039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:44:45.558430   10039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:44:45.559948   10039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:44:45.563442   10039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:44:45.566716   10039 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:45.566784   10039 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:44:45.566831   10039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:44:45.571278   10039 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:44:45.578391   10039 start.go:297] selected driver: qemu2
	I0729 03:44:45.578397   10039 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:44:45.578406   10039 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:44:45.580780   10039 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:44:45.584442   10039 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:44:45.588450   10039 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:44:45.588481   10039 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 03:44:45.588514   10039 start.go:340] cluster config:
	{Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:45.592161   10039 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:44:45.600397   10039 out.go:177] * Starting "kubenet-218000" primary control-plane node in "kubenet-218000" cluster
	I0729 03:44:45.604473   10039 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:44:45.604488   10039 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:44:45.604498   10039 cache.go:56] Caching tarball of preloaded images
	I0729 03:44:45.604555   10039 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:44:45.604561   10039 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:44:45.604637   10039 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kubenet-218000/config.json ...
	I0729 03:44:45.604649   10039 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/kubenet-218000/config.json: {Name:mk5c82be95ee67ad2f10af72483d01a518c337a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:44:45.604979   10039 start.go:360] acquireMachinesLock for kubenet-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:45.605017   10039 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "kubenet-218000"
	I0729 03:44:45.605030   10039 start.go:93] Provisioning new machine with config: &{Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:45.605056   10039 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:45.609400   10039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:45.626639   10039 start.go:159] libmachine.API.Create for "kubenet-218000" (driver="qemu2")
	I0729 03:44:45.626663   10039 client.go:168] LocalClient.Create starting
	I0729 03:44:45.626738   10039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:45.626776   10039 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:45.626785   10039 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:45.626834   10039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:45.626858   10039 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:45.626865   10039 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:45.627246   10039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:45.779677   10039 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:45.878737   10039 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:45.878743   10039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:45.879176   10039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2
	I0729 03:44:45.888841   10039 main.go:141] libmachine: STDOUT: 
	I0729 03:44:45.888863   10039 main.go:141] libmachine: STDERR: 
	I0729 03:44:45.888916   10039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2 +20000M
	I0729 03:44:45.897264   10039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:45.897285   10039 main.go:141] libmachine: STDERR: 
	I0729 03:44:45.897306   10039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2
	I0729 03:44:45.897311   10039 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:45.897320   10039 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:45.897345   10039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:53:45:ae:d0:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2
	I0729 03:44:45.898964   10039 main.go:141] libmachine: STDOUT: 
	I0729 03:44:45.898986   10039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:45.899003   10039 client.go:171] duration metric: took 272.342041ms to LocalClient.Create
	I0729 03:44:47.901060   10039 start.go:128] duration metric: took 2.296035625s to createHost
	I0729 03:44:47.901086   10039 start.go:83] releasing machines lock for "kubenet-218000", held for 2.296107708s
	W0729 03:44:47.901118   10039 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:47.911843   10039 out.go:177] * Deleting "kubenet-218000" in qemu2 ...
	W0729 03:44:47.919886   10039 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:47.919893   10039 start.go:729] Will try again in 5 seconds ...
	I0729 03:44:52.921918   10039 start.go:360] acquireMachinesLock for kubenet-218000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:52.922099   10039 start.go:364] duration metric: took 146.958µs to acquireMachinesLock for "kubenet-218000"
	I0729 03:44:52.922138   10039 start.go:93] Provisioning new machine with config: &{Name:kubenet-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:52.922206   10039 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:52.930471   10039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 03:44:52.952096   10039 start.go:159] libmachine.API.Create for "kubenet-218000" (driver="qemu2")
	I0729 03:44:52.952128   10039 client.go:168] LocalClient.Create starting
	I0729 03:44:52.952193   10039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:52.952234   10039 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:52.952245   10039 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:52.952284   10039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:52.952310   10039 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:52.952317   10039 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:52.952639   10039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:53.101964   10039 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:53.325657   10039 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:53.325667   10039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:53.325894   10039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2
	I0729 03:44:53.335614   10039 main.go:141] libmachine: STDOUT: 
	I0729 03:44:53.335632   10039 main.go:141] libmachine: STDERR: 
	I0729 03:44:53.335685   10039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2 +20000M
	I0729 03:44:53.343702   10039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:53.343714   10039 main.go:141] libmachine: STDERR: 
	I0729 03:44:53.343726   10039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2
	I0729 03:44:53.343731   10039 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:53.343740   10039 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:53.343789   10039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:4b:18:0d:85:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/kubenet-218000/disk.qcow2
	I0729 03:44:53.345424   10039 main.go:141] libmachine: STDOUT: 
	I0729 03:44:53.345437   10039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:53.345450   10039 client.go:171] duration metric: took 393.326667ms to LocalClient.Create
	I0729 03:44:55.345929   10039 start.go:128] duration metric: took 2.423735875s to createHost
	I0729 03:44:55.346016   10039 start.go:83] releasing machines lock for "kubenet-218000", held for 2.423952208s
	W0729 03:44:55.346471   10039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:44:55.361117   10039 out.go:177] 
	W0729 03:44:55.365406   10039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:44:55.365463   10039 out.go:239] * 
	* 
	W0729 03:44:55.368521   10039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:44:55.376262   10039 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.91s)
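
None of the qemu command lines above name the network socket directly; `-netdev socket,id=net0,fd=3` expects socket_vmnet_client to have connected to /var/run/socket_vmnet and handed an already-connected descriptor to qemu as fd 3. A rough Go equivalent of that hand-off, under the assumption that the client simply passes the connected descriptor through to the child (the qemu flags are truncated to the relevant one):

// fdpass_sketch.go — illustrative only: how a connected descriptor becomes
// fd 3 in a child process, which is what "-netdev socket,id=net0,fd=3" expects.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// This dial is exactly where every run in this report stops.
		log.Fatalf("connect: %v", err)
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the descriptor for the child
	if err != nil {
		log.Fatalf("dup: %v", err)
	}
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] is inherited as fd 3
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("qemu: %v", err)
	}
}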

TestStartStop/group/old-k8s-version/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.00291225s)

-- stdout --
	* [old-k8s-version-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-363000" primary control-plane node in "old-k8s-version-363000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-363000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:44:57.640077   10153 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:44:57.640189   10153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:57.640192   10153 out.go:304] Setting ErrFile to fd 2...
	I0729 03:44:57.640195   10153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:44:57.640317   10153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:44:57.641359   10153 out.go:298] Setting JSON to false
	I0729 03:44:57.657942   10153 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6266,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:44:57.658009   10153 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:44:57.664427   10153 out.go:177] * [old-k8s-version-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:44:57.672431   10153 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:44:57.672515   10153 notify.go:220] Checking for updates...
	I0729 03:44:57.681314   10153 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:44:57.682735   10153 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:44:57.686368   10153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:44:57.689397   10153 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:44:57.692439   10153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:44:57.695697   10153 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:44:57.695770   10153 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:44:57.695844   10153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:44:57.700361   10153 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:44:57.707355   10153 start.go:297] selected driver: qemu2
	I0729 03:44:57.707362   10153 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:44:57.707368   10153 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:44:57.709789   10153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:44:57.714399   10153 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:44:57.717506   10153 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:44:57.717523   10153 cni.go:84] Creating CNI manager for ""
	I0729 03:44:57.717537   10153 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:44:57.717572   10153 start.go:340] cluster config:
	{Name:old-k8s-version-363000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:44:57.721194   10153 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:44:57.728361   10153 out.go:177] * Starting "old-k8s-version-363000" primary control-plane node in "old-k8s-version-363000" cluster
	I0729 03:44:57.732338   10153 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:44:57.732350   10153 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:44:57.732365   10153 cache.go:56] Caching tarball of preloaded images
	I0729 03:44:57.732409   10153 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:44:57.732414   10153 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:44:57.732469   10153 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/old-k8s-version-363000/config.json ...
	I0729 03:44:57.732482   10153 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/old-k8s-version-363000/config.json: {Name:mk61b57fc19962250fdba344f3dcd4d3909b542b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:44:57.732693   10153 start.go:360] acquireMachinesLock for old-k8s-version-363000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:44:57.732726   10153 start.go:364] duration metric: took 25.916µs to acquireMachinesLock for "old-k8s-version-363000"
	I0729 03:44:57.732737   10153 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:44:57.732765   10153 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:44:57.741253   10153 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:44:57.758276   10153 start.go:159] libmachine.API.Create for "old-k8s-version-363000" (driver="qemu2")
	I0729 03:44:57.758305   10153 client.go:168] LocalClient.Create starting
	I0729 03:44:57.758376   10153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:44:57.758409   10153 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:57.758425   10153 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:57.758461   10153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:44:57.758484   10153 main.go:141] libmachine: Decoding PEM data...
	I0729 03:44:57.758490   10153 main.go:141] libmachine: Parsing certificate...
	I0729 03:44:57.758852   10153 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:44:57.908169   10153 main.go:141] libmachine: Creating SSH key...
	I0729 03:44:58.106523   10153 main.go:141] libmachine: Creating Disk image...
	I0729 03:44:58.106533   10153 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:44:58.106781   10153 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:44:58.116715   10153 main.go:141] libmachine: STDOUT: 
	I0729 03:44:58.116742   10153 main.go:141] libmachine: STDERR: 
	I0729 03:44:58.116801   10153 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2 +20000M
	I0729 03:44:58.124898   10153 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:44:58.124914   10153 main.go:141] libmachine: STDERR: 
	I0729 03:44:58.124934   10153 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:44:58.124939   10153 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:44:58.124954   10153 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:44:58.124990   10153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a2:83:28:57:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:44:58.126708   10153 main.go:141] libmachine: STDOUT: 
	I0729 03:44:58.126722   10153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:44:58.126745   10153 client.go:171] duration metric: took 368.442208ms to LocalClient.Create
	I0729 03:45:00.128889   10153 start.go:128] duration metric: took 2.396143291s to createHost
	I0729 03:45:00.128975   10153 start.go:83] releasing machines lock for "old-k8s-version-363000", held for 2.396287792s
	W0729 03:45:00.129046   10153 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:00.141857   10153 out.go:177] * Deleting "old-k8s-version-363000" in qemu2 ...
	W0729 03:45:00.162787   10153 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:00.162811   10153 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:05.164879   10153 start.go:360] acquireMachinesLock for old-k8s-version-363000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:05.165038   10153 start.go:364] duration metric: took 108.958µs to acquireMachinesLock for "old-k8s-version-363000"
	I0729 03:45:05.165081   10153 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:05.165165   10153 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:05.174396   10153 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:05.195699   10153 start.go:159] libmachine.API.Create for "old-k8s-version-363000" (driver="qemu2")
	I0729 03:45:05.195731   10153 client.go:168] LocalClient.Create starting
	I0729 03:45:05.195815   10153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:05.195849   10153 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:05.195858   10153 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:05.195898   10153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:05.195924   10153 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:05.195930   10153 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:05.196252   10153 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:05.347182   10153 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:05.548876   10153 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:05.548885   10153 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:05.549120   10153 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:45:05.558801   10153 main.go:141] libmachine: STDOUT: 
	I0729 03:45:05.558820   10153 main.go:141] libmachine: STDERR: 
	I0729 03:45:05.558903   10153 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2 +20000M
	I0729 03:45:05.566992   10153 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:05.567006   10153 main.go:141] libmachine: STDERR: 
	I0729 03:45:05.567018   10153 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:45:05.567022   10153 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:05.567035   10153 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:05.567068   10153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ee:59:e7:3b:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:45:05.568696   10153 main.go:141] libmachine: STDOUT: 
	I0729 03:45:05.568709   10153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:05.568721   10153 client.go:171] duration metric: took 372.992125ms to LocalClient.Create
	I0729 03:45:07.570892   10153 start.go:128] duration metric: took 2.405731625s to createHost
	I0729 03:45:07.571000   10153 start.go:83] releasing machines lock for "old-k8s-version-363000", held for 2.405991042s
	W0729 03:45:07.571453   10153 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-363000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:07.581222   10153 out.go:177] 
	W0729 03:45:07.589356   10153 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:07.589383   10153 out.go:239] * 
	* 
	W0729 03:45:07.592037   10153 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:07.605161   10153 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (66.324417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.07s)
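
Every failure in this serial group traces back to one host-side condition visible in the log above: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the macOS build host, assuming socket_vmnet was installed under /opt/socket_vmnet and runs as a launchd daemon (the paths come from the log; the launchd label is an assumption):

    # A missing socket file means the daemon never started; an existing file
    # plus "Connection refused" means the daemon is no longer listening.
    ls -l /var/run/socket_vmnet

    # Assumption: the daemon was registered with launchd; adjust the pattern
    # to whatever label the install step actually used.
    sudo launchctl list | grep -i socket_vmnet

    # Exercise the client the same way minikube does: it connects to the
    # socket, then runs the wrapped command with the vmnet fd attached.
    # `true` is a harmless stand-in for the qemu invocation in the log.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the last command reproduces the refusal, restarting the daemon is the fix; minikube itself retries VM creation only once before exiting with status 80, as the two "Creating qemu2 VM" blocks above show.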

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-363000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-363000 create -f testdata/busybox.yaml: exit status 1 (30.214541ms)

** stderr ** 
	error: context "old-k8s-version-363000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-363000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (29.143125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (29.091167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
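
DeployApp is a cascade failure rather than an independent one: because FirstStart never brought the cluster up, no "old-k8s-version-363000" context was ever written to the kubeconfig, so every kubectl --context invocation in this group exits with "context ... does not exist". This can be confirmed on the host with stock kubectl (the KUBECONFIG path is copied from the log above):

    export KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
    # The failed profile should be absent from the context list.
    kubectl config get-contexts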

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-363000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-363000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-363000 describe deploy/metrics-server -n kube-system: exit status 1 (28.233083ms)

** stderr ** 
	error: context "old-k8s-version-363000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-363000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (29.21325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.184078125s)

-- stdout --
	* [old-k8s-version-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-363000" primary control-plane node in "old-k8s-version-363000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-363000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-363000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:45:10.958920   10206 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:10.959053   10206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:10.959056   10206 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:10.959059   10206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:10.959178   10206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:10.960220   10206 out.go:298] Setting JSON to false
	I0729 03:45:10.977054   10206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6279,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:10.977125   10206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:10.982399   10206 out.go:177] * [old-k8s-version-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:10.989199   10206 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:10.989320   10206 notify.go:220] Checking for updates...
	I0729 03:45:10.996312   10206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:10.999237   10206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:11.002318   10206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:11.005304   10206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:11.008309   10206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:11.011572   10206 config.go:182] Loaded profile config "old-k8s-version-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 03:45:11.015296   10206 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 03:45:11.018300   10206 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:11.022223   10206 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:45:11.029192   10206 start.go:297] selected driver: qemu2
	I0729 03:45:11.029199   10206 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:11.029253   10206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:11.031680   10206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:11.031719   10206 cni.go:84] Creating CNI manager for ""
	I0729 03:45:11.031727   10206 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:45:11.031744   10206 start.go:340] cluster config:
	{Name:old-k8s-version-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-363000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:11.035234   10206 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:11.043258   10206 out.go:177] * Starting "old-k8s-version-363000" primary control-plane node in "old-k8s-version-363000" cluster
	I0729 03:45:11.047266   10206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:45:11.047278   10206 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:45:11.047285   10206 cache.go:56] Caching tarball of preloaded images
	I0729 03:45:11.047333   10206 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:45:11.047338   10206 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:45:11.047389   10206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/old-k8s-version-363000/config.json ...
	I0729 03:45:11.047770   10206 start.go:360] acquireMachinesLock for old-k8s-version-363000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:11.047796   10206 start.go:364] duration metric: took 19.417µs to acquireMachinesLock for "old-k8s-version-363000"
	I0729 03:45:11.047805   10206 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:11.047812   10206 fix.go:54] fixHost starting: 
	I0729 03:45:11.047923   10206 fix.go:112] recreateIfNeeded on old-k8s-version-363000: state=Stopped err=<nil>
	W0729 03:45:11.047930   10206 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:11.052254   10206 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-363000" ...
	I0729 03:45:11.060258   10206 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:11.060287   10206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ee:59:e7:3b:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:45:11.062095   10206 main.go:141] libmachine: STDOUT: 
	I0729 03:45:11.062116   10206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:11.062142   10206 fix.go:56] duration metric: took 14.331042ms for fixHost
	I0729 03:45:11.062147   10206 start.go:83] releasing machines lock for "old-k8s-version-363000", held for 14.347416ms
	W0729 03:45:11.062151   10206 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:11.062178   10206 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:11.062182   10206 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:16.064310   10206 start.go:360] acquireMachinesLock for old-k8s-version-363000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:16.064809   10206 start.go:364] duration metric: took 384.917µs to acquireMachinesLock for "old-k8s-version-363000"
	I0729 03:45:16.064879   10206 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:16.064895   10206 fix.go:54] fixHost starting: 
	I0729 03:45:16.065422   10206 fix.go:112] recreateIfNeeded on old-k8s-version-363000: state=Stopped err=<nil>
	W0729 03:45:16.065441   10206 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:16.072284   10206 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-363000" ...
	I0729 03:45:16.075234   10206 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:16.075356   10206 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ee:59:e7:3b:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/old-k8s-version-363000/disk.qcow2
	I0729 03:45:16.082543   10206 main.go:141] libmachine: STDOUT: 
	I0729 03:45:16.082584   10206 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:16.082630   10206 fix.go:56] duration metric: took 17.737542ms for fixHost
	I0729 03:45:16.082644   10206 start.go:83] releasing machines lock for "old-k8s-version-363000", held for 17.818041ms
	W0729 03:45:16.082748   10206 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-363000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-363000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:16.088640   10206 out.go:177] 
	W0729 03:45:16.092291   10206 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:16.092313   10206 out.go:239] * 
	* 
	W0729 03:45:16.093360   10206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:16.105270   10206 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (46.9735ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
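
SecondStart fails in roughly half the time of FirstStart because it reuses the existing profile and only retries "Restarting existing qemu2 VM" rather than re-creating the disk image; the stale profile otherwise adds nothing. The recovery path the error message itself suggests, sketched as commands (the start flags are copied verbatim from the failing invocation, and the retry will keep failing until the socket_vmnet daemon is reachable again):

    # Drop the half-created profile, as the GUEST_PROVISION message advises.
    out/minikube-darwin-arm64 delete -p old-k8s-version-363000

    # Then repeat the original first start.
    out/minikube-darwin-arm64 start -p old-k8s-version-363000 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default \
      --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
      --keep-context=false --driver=qemu2 --kubernetes-version=v1.20.0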

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-363000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (30.845333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-363000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-363000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-363000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.630125ms)

** stderr ** 
	error: context "old-k8s-version-363000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-363000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (28.515625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-363000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (28.214459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
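
The "(-want +got)" block above follows go-cmp diff conventions: every line prefixed with "-" is an expected entry that was not returned, and there are no "+" lines because image list returned nothing at all; the VM never booted, so none of the v1.20.0 images were ever loaded. The check can be reproduced by hand with the same command the test runs (expect an empty result, or an error, while the profile is stopped):

    out/minikube-darwin-arm64 -p old-k8s-version-363000 image list --format=json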

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-363000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-363000 --alsologtostderr -v=1: exit status 83 (39.402375ms)

-- stdout --
	* The control-plane node old-k8s-version-363000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-363000"

-- /stdout --
** stderr ** 
	I0729 03:45:16.347770   10225 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:16.348818   10225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:16.348825   10225 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:16.348828   10225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:16.348964   10225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:16.349181   10225 out.go:298] Setting JSON to false
	I0729 03:45:16.349188   10225 mustload.go:65] Loading cluster: old-k8s-version-363000
	I0729 03:45:16.349368   10225 config.go:182] Loaded profile config "old-k8s-version-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 03:45:16.351304   10225 out.go:177] * The control-plane node old-k8s-version-363000 host is not running: state=Stopped
	I0729 03:45:16.355086   10225 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-363000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-363000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (28.22ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (29.241958ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-363000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
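
The exit codes above carry the whole diagnosis: "status --format={{.Host}}" exits 7 for a stopped host, and "pause" exits 83 when the control plane is not running. A minimal Go sketch of a pre-flight guard built on those two observed codes follows; the binary path and profile name are copied from this run, while the guard logic itself is illustrative and not part of the test harness.

// Illustrative guard only: exit code 7 ("status" on a stopped host) and 83
// ("pause" with no running control plane) are the values observed above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const bin = "out/minikube-darwin-arm64"
	const profile = "old-k8s-version-363000"

	// Same status invocation the post-mortem helpers run above.
	status := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	if err := status.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			fmt.Println("host is stopped; pause would exit 83, skipping")
			return
		}
	}
	if out, err := exec.Command(bin, "pause", "-p", profile).CombinedOutput(); err != nil {
		fmt.Printf("pause failed: %v\n%s", err, out)
	}
}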

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-092000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-092000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (10.221186625s)

                                                
                                                
-- stdout --
	* [no-preload-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-092000" primary control-plane node in "no-preload-092000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-092000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:45:16.659590   10242 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:16.659724   10242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:16.659727   10242 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:16.659730   10242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:16.659866   10242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:16.660958   10242 out.go:298] Setting JSON to false
	I0729 03:45:16.677325   10242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6285,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:16.677396   10242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:16.681174   10242 out.go:177] * [no-preload-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:16.688180   10242 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:16.688215   10242 notify.go:220] Checking for updates...
	I0729 03:45:16.695150   10242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:16.698187   10242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:16.701237   10242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:16.704168   10242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:16.707234   10242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:16.710405   10242 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:16.710468   10242 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:45:16.710521   10242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:16.714217   10242 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:45:16.721069   10242 start.go:297] selected driver: qemu2
	I0729 03:45:16.721075   10242 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:45:16.721082   10242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:16.723419   10242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:45:16.727158   10242 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:45:16.730256   10242 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:16.730285   10242 cni.go:84] Creating CNI manager for ""
	I0729 03:45:16.730292   10242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:16.730301   10242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:45:16.730327   10242 start.go:340] cluster config:
	{Name:no-preload-092000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:16.733971   10242 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.742175   10242 out.go:177] * Starting "no-preload-092000" primary control-plane node in "no-preload-092000" cluster
	I0729 03:45:16.746041   10242 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:45:16.746128   10242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/no-preload-092000/config.json ...
	I0729 03:45:16.746144   10242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/no-preload-092000/config.json: {Name:mkc1c88b4bc73ca174a81b61a6635ba7cdb33f87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:45:16.746146   10242 cache.go:107] acquiring lock: {Name:mk44c8e8bff79c2c693a53299c9699d4b770669c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746149   10242 cache.go:107] acquiring lock: {Name:mk7a62c3289f3135e51947a2ffc8375ebd524608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746220   10242 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 03:45:16.746230   10242 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.958µs
	I0729 03:45:16.746247   10242 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 03:45:16.746254   10242 cache.go:107] acquiring lock: {Name:mk015b1faf70f77e84551cdc40aec00f7613d877 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746242   10242 cache.go:107] acquiring lock: {Name:mk959e429f2ad0ca16ede710fe89c79ff296eaca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746319   10242 cache.go:107] acquiring lock: {Name:mk44e5fb1a436cbaedba30e31c01ffea94113850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746357   10242 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 03:45:16.746369   10242 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 03:45:16.746353   10242 cache.go:107] acquiring lock: {Name:mke0d7302138b04b67e5a874a5e73546bbf40e49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746411   10242 start.go:360] acquireMachinesLock for no-preload-092000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:16.746453   10242 start.go:364] duration metric: took 36.875µs to acquireMachinesLock for "no-preload-092000"
	I0729 03:45:16.746489   10242 cache.go:107] acquiring lock: {Name:mk6c974bec64ab8b9abba37e1ae08ed0fa9f00b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746519   10242 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 03:45:16.746557   10242 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 03:45:16.746465   10242 start.go:93] Provisioning new machine with config: &{Name:no-preload-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:16.746577   10242 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:16.746589   10242 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 03:45:16.746609   10242 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 03:45:16.746607   10242 cache.go:107] acquiring lock: {Name:mkbb5d44fd5f5e79fbdbdbf5f2ff088de707675a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:16.746712   10242 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 03:45:16.754158   10242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:16.757478   10242 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 03:45:16.757516   10242 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 03:45:16.757624   10242 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 03:45:16.759164   10242 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 03:45:16.759229   10242 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 03:45:16.759405   10242 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 03:45:16.759947   10242 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 03:45:16.771142   10242 start.go:159] libmachine.API.Create for "no-preload-092000" (driver="qemu2")
	I0729 03:45:16.771166   10242 client.go:168] LocalClient.Create starting
	I0729 03:45:16.771286   10242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:16.771328   10242 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:16.771340   10242 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:16.771391   10242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:16.771419   10242 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:16.771427   10242 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:16.771796   10242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:16.925357   10242 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:17.076843   10242 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:17.076863   10242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:17.077113   10242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:17.086869   10242 main.go:141] libmachine: STDOUT: 
	I0729 03:45:17.086884   10242 main.go:141] libmachine: STDERR: 
	I0729 03:45:17.086932   10242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2 +20000M
	I0729 03:45:17.095648   10242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:17.095671   10242 main.go:141] libmachine: STDERR: 
	I0729 03:45:17.095682   10242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:17.095686   10242 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:17.095700   10242 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:17.095729   10242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ab:a5:12:ef:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:17.097615   10242 main.go:141] libmachine: STDOUT: 
	I0729 03:45:17.097633   10242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:17.097651   10242 client.go:171] duration metric: took 326.488042ms to LocalClient.Create
	I0729 03:45:17.134374   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 03:45:17.144502   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 03:45:17.150016   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 03:45:17.170766   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 03:45:17.179025   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 03:45:17.250302   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 03:45:17.267901   10242 cache.go:162] opening:  /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 03:45:17.342986   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 03:45:17.343014   10242 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 596.700166ms
	I0729 03:45:17.343030   10242 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 03:45:19.097745   10242 start.go:128] duration metric: took 2.35119425s to createHost
	I0729 03:45:19.097781   10242 start.go:83] releasing machines lock for "no-preload-092000", held for 2.3513695s
	W0729 03:45:19.097805   10242 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:19.113433   10242 out.go:177] * Deleting "no-preload-092000" in qemu2 ...
	W0729 03:45:19.122844   10242 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:19.122851   10242 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:20.134826   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 03:45:20.134847   10242 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 3.388698333s
	I0729 03:45:20.134856   10242 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 03:45:20.245823   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 03:45:20.245838   10242 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.499652s
	I0729 03:45:20.245849   10242 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 03:45:20.774656   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 03:45:20.774672   10242 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.028613916s
	I0729 03:45:20.774679   10242 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 03:45:20.920200   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 03:45:20.920212   10242 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.173993542s
	I0729 03:45:20.920220   10242 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 03:45:20.936809   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 03:45:20.936818   10242 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 4.190449583s
	I0729 03:45:20.936824   10242 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 03:45:24.124927   10242 start.go:360] acquireMachinesLock for no-preload-092000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:24.125305   10242 start.go:364] duration metric: took 308.417µs to acquireMachinesLock for "no-preload-092000"
	I0729 03:45:24.125419   10242 start.go:93] Provisioning new machine with config: &{Name:no-preload-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:24.125621   10242 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:24.134010   10242 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:24.171995   10242 start.go:159] libmachine.API.Create for "no-preload-092000" (driver="qemu2")
	I0729 03:45:24.172042   10242 client.go:168] LocalClient.Create starting
	I0729 03:45:24.172178   10242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:24.172242   10242 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:24.172261   10242 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:24.172337   10242 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:24.172378   10242 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:24.172395   10242 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:24.172895   10242 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:24.327244   10242 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:24.790120   10242 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:24.790135   10242 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:24.790391   10242 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:24.800447   10242 main.go:141] libmachine: STDOUT: 
	I0729 03:45:24.800466   10242 main.go:141] libmachine: STDERR: 
	I0729 03:45:24.800529   10242 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2 +20000M
	I0729 03:45:24.808804   10242 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:24.808817   10242 main.go:141] libmachine: STDERR: 
	I0729 03:45:24.808870   10242 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:24.808875   10242 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:24.808883   10242 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:24.808917   10242 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c0:d5:38:71:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:24.810634   10242 main.go:141] libmachine: STDOUT: 
	I0729 03:45:24.810647   10242 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:24.810659   10242 client.go:171] duration metric: took 638.621083ms to LocalClient.Create
	I0729 03:45:24.952518   10242 cache.go:157] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 03:45:24.952542   10242 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.206444708s
	I0729 03:45:24.952556   10242 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 03:45:24.952573   10242 cache.go:87] Successfully saved all images to host disk.
	I0729 03:45:26.812931   10242 start.go:128] duration metric: took 2.687279042s to createHost
	I0729 03:45:26.813052   10242 start.go:83] releasing machines lock for "no-preload-092000", held for 2.687781958s
	W0729 03:45:26.813428   10242 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-092000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:26.826030   10242 out.go:177] 
	W0729 03:45:26.831185   10242 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:26.831213   10242 out.go:239] * 
	* 
	W0729 03:45:26.832956   10242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:26.841875   10242 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-092000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
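
Both create attempts above die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu never gets its network file descriptor and the start exits with status 80. A short Go sketch that reproduces just that probe follows; the dial is a diagnostic assumption for reading this log, not what libmachine itself runs.

// Diagnostic sketch: dial the unix socket socket_vmnet_client needs.
// A "connection refused" here matches the ERROR lines above and means
// the socket_vmnet daemon is not listening.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}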
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (49.633208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.27s)
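
While the VM creation fails, the image-cache work in the same log succeeds: each cache.go line checks for a per-image tar under .minikube/cache/images/arm64, downloads on a miss, and logs "exists" on a hit, with the tag colon mapped to an underscore on disk (registry.k8s.io/pause:3.10 -> pause_3.10). A sketch of that existence check, with the path layout taken from the log and the helper name invented here:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedTarPath mirrors the on-disk layout visible in the cache.go lines
// above; the function name is hypothetical, not minikube's.
func cachedTarPath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	// Cache directory copied from this run; substitute your own .minikube.
	cacheDir := "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64"
	p := cachedTarPath(cacheDir, "registry.k8s.io/pause:3.10")
	if _, err := os.Stat(p); err == nil {
		fmt.Println(p, "exists; download skipped")
	} else {
		fmt.Println(p, "missing; would be downloaded and saved")
	}
}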

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-092000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-092000 create -f testdata/busybox.yaml: exit status 1 (29.595083ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-092000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-092000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (28.956833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (28.517959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
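
Every kubectl step in this group fails identically because FirstStart never produced a cluster, so no "no-preload-092000" context was ever written to the kubeconfig. A sketch that makes the missing-context diagnosis explicit, assuming k8s.io/client-go is available as a module dependency (the check is illustrative, not part of the harness):

// Load the default kubeconfig chain (honors KUBECONFIG, as set in this run)
// and show why "--context no-preload-092000" cannot resolve.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["no-preload-092000"]; !ok {
		fmt.Println(`context "no-preload-092000" does not exist`)
	}
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
}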

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-092000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-092000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-092000 describe deploy/metrics-server -n kube-system: exit status 1 (26.807125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-092000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-092000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (28.96175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
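
The assertion at start_stop_delete_test.go:221 reduces to one string check: the metrics-server deployment image should carry the fake.domain registry override passed to "addons enable". A standalone version of that check follows; the jsonpath expression and single-container assumption are mine, not copied from the test, and running it needs a live cluster, which this run never got.

// Read the metrics-server deployment image and look for the fake.domain
// override from the addons enable flags observed above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-092000",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Println("cannot read deployment:", err)
		return
	}
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	if strings.Contains(string(out), want) {
		fmt.Println("addon image override applied:", string(out))
	} else {
		fmt.Printf("unexpected image %q, want substring %q\n", string(out), want)
	}
}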

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-092000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-092000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.173739667s)

                                                
                                                
-- stdout --
	* [no-preload-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-092000" primary control-plane node in "no-preload-092000" cluster
	* Restarting existing qemu2 VM for "no-preload-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-092000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:45:29.253094   10324 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:29.253223   10324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:29.253226   10324 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:29.253228   10324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:29.253365   10324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:29.254354   10324 out.go:298] Setting JSON to false
	I0729 03:45:29.270768   10324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6298,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:29.270852   10324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:29.275590   10324 out.go:177] * [no-preload-092000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:29.282618   10324 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:29.282656   10324 notify.go:220] Checking for updates...
	I0729 03:45:29.290532   10324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:29.293534   10324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:29.296458   10324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:29.299580   10324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:29.302573   10324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:29.305677   10324 config.go:182] Loaded profile config "no-preload-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 03:45:29.305913   10324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:29.309576   10324 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:45:29.316538   10324 start.go:297] selected driver: qemu2
	I0729 03:45:29.316546   10324 start.go:901] validating driver "qemu2" against &{Name:no-preload-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:29.316610   10324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:29.319090   10324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:29.319117   10324 cni.go:84] Creating CNI manager for ""
	I0729 03:45:29.319125   10324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:29.319142   10324 start.go:340] cluster config:
	{Name:no-preload-092000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-092000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:29.322748   10324 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.330360   10324 out.go:177] * Starting "no-preload-092000" primary control-plane node in "no-preload-092000" cluster
	I0729 03:45:29.334476   10324 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:45:29.334533   10324 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/no-preload-092000/config.json ...
	I0729 03:45:29.334570   10324 cache.go:107] acquiring lock: {Name:mk44c8e8bff79c2c693a53299c9699d4b770669c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334575   10324 cache.go:107] acquiring lock: {Name:mk7a62c3289f3135e51947a2ffc8375ebd524608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334583   10324 cache.go:107] acquiring lock: {Name:mke0d7302138b04b67e5a874a5e73546bbf40e49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334648   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 03:45:29.334651   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 03:45:29.334657   10324 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 89.333µs
	I0729 03:45:29.334664   10324 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 03:45:29.334660   10324 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 90.791µs
	I0729 03:45:29.334669   10324 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 03:45:29.334671   10324 cache.go:107] acquiring lock: {Name:mkbb5d44fd5f5e79fbdbdbf5f2ff088de707675a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334678   10324 cache.go:107] acquiring lock: {Name:mk44e5fb1a436cbaedba30e31c01ffea94113850 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334689   10324 cache.go:107] acquiring lock: {Name:mk959e429f2ad0ca16ede710fe89c79ff296eaca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334690   10324 cache.go:107] acquiring lock: {Name:mk015b1faf70f77e84551cdc40aec00f7613d877 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334694   10324 cache.go:107] acquiring lock: {Name:mk6c974bec64ab8b9abba37e1ae08ed0fa9f00b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:29.334704   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 03:45:29.334716   10324 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 142.042µs
	I0729 03:45:29.334724   10324 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 03:45:29.334770   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 03:45:29.334776   10324 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 105.625µs
	I0729 03:45:29.334779   10324 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 03:45:29.334780   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 03:45:29.334780   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 03:45:29.334785   10324 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 98.042µs
	I0729 03:45:29.334789   10324 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 03:45:29.334787   10324 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 109.833µs
	I0729 03:45:29.334792   10324 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 03:45:29.334791   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 03:45:29.334804   10324 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 114µs
	I0729 03:45:29.334813   10324 cache.go:115] /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 03:45:29.334815   10324 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 03:45:29.334816   10324 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 126.5µs
	I0729 03:45:29.334820   10324 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 03:45:29.334824   10324 cache.go:87] Successfully saved all images to host disk.
	I0729 03:45:29.334994   10324 start.go:360] acquireMachinesLock for no-preload-092000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:29.335023   10324 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "no-preload-092000"
	I0729 03:45:29.335033   10324 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:29.335039   10324 fix.go:54] fixHost starting: 
	I0729 03:45:29.335163   10324 fix.go:112] recreateIfNeeded on no-preload-092000: state=Stopped err=<nil>
	W0729 03:45:29.335174   10324 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:29.342538   10324 out.go:177] * Restarting existing qemu2 VM for "no-preload-092000" ...
	I0729 03:45:29.346576   10324 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:29.346620   10324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c0:d5:38:71:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:29.348716   10324 main.go:141] libmachine: STDOUT: 
	I0729 03:45:29.348739   10324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:29.348766   10324 fix.go:56] duration metric: took 13.728625ms for fixHost
	I0729 03:45:29.348770   10324 start.go:83] releasing machines lock for "no-preload-092000", held for 13.743084ms
	W0729 03:45:29.348776   10324 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:29.348803   10324 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:29.348808   10324 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:34.349248   10324 start.go:360] acquireMachinesLock for no-preload-092000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:34.349665   10324 start.go:364] duration metric: took 341.667µs to acquireMachinesLock for "no-preload-092000"
	I0729 03:45:34.349783   10324 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:34.349797   10324 fix.go:54] fixHost starting: 
	I0729 03:45:34.350316   10324 fix.go:112] recreateIfNeeded on no-preload-092000: state=Stopped err=<nil>
	W0729 03:45:34.350333   10324 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:34.353942   10324 out.go:177] * Restarting existing qemu2 VM for "no-preload-092000" ...
	I0729 03:45:34.359722   10324 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:34.359911   10324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c0:d5:38:71:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/no-preload-092000/disk.qcow2
	I0729 03:45:34.368022   10324 main.go:141] libmachine: STDOUT: 
	I0729 03:45:34.368078   10324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:34.368164   10324 fix.go:56] duration metric: took 18.368625ms for fixHost
	I0729 03:45:34.368176   10324 start.go:83] releasing machines lock for "no-preload-092000", held for 18.496041ms
	W0729 03:45:34.368322   10324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-092000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:34.375712   10324 out.go:177] 
	W0729 03:45:34.378753   10324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:34.378771   10324 out.go:239] * 
	* 
	W0729 03:45:34.380615   10324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:34.387749   10324 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-092000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (57.346334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
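
Every start attempt in this group fails the same way: the qemu2 driver hands the VM off through socket_vmnet_client, the connection to /var/run/socket_vmnet is refused before QEMU ever boots, and minikube exits with status 80 (GUEST_PROVISION). A minimal diagnostic sketch for the agent, assuming the standard /opt/socket_vmnet install seen in the log (the launchd label and gateway address below are assumptions, not taken from this run):

	# Is the socket_vmnet daemon alive, and does its socket exist?
	sudo launchctl list | grep socket_vmnet   # label is an assumption; depends on how the daemon was installed
	ls -l /var/run/socket_vmnet               # socket path used by the driver invocations above

	# If the daemon is down, run it in the foreground to surface errors
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet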

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-092000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (31.434833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
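
The post-stop start above never completed, so no kubeconfig context named "no-preload-092000" was ever written; this assertion and the ones that follow all abort with the same "context does not exist" error before touching a cluster. A quick check from the workspace (sketch; the kubeconfig path is the one exported in this run's environment):

	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/19337-6349/kubeconfig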

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-092000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.52875ms)

** stderr ** 
	error: context "no-preload-092000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-092000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (29.306292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-092000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (28.964542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-092000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-092000 --alsologtostderr -v=1: exit status 83 (40.939ms)

-- stdout --
	* The control-plane node no-preload-092000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-092000"

-- /stdout --
** stderr ** 
	I0729 03:45:34.643409   10343 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:34.643554   10343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:34.643557   10343 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:34.643560   10343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:34.643687   10343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:34.643921   10343 out.go:298] Setting JSON to false
	I0729 03:45:34.643926   10343 mustload.go:65] Loading cluster: no-preload-092000
	I0729 03:45:34.644119   10343 config.go:182] Loaded profile config "no-preload-092000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 03:45:34.648124   10343 out.go:177] * The control-plane node no-preload-092000 host is not running: state=Stopped
	I0729 03:45:34.651937   10343 out.go:177]   To start a cluster, run: "minikube start -p no-preload-092000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-092000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (28.379875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (28.822584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-092000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
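
The pause command never reaches a runtime: it loads the profile, sees the Stopped host, and exits with the advisory status 83 rather than a hard failure. The recovery it suggests is the same start that SecondStart already failed on, so it will keep failing until the socket_vmnet socket is reachable:

	minikube start -p no-preload-092000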

TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-606000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-606000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.911703541s)

-- stdout --
	* [embed-certs-606000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-606000" primary control-plane node in "embed-certs-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:45:34.958987   10360 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:34.959115   10360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:34.959119   10360 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:34.959121   10360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:34.959294   10360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:34.960377   10360 out.go:298] Setting JSON to false
	I0729 03:45:34.976521   10360 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6303,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:34.976597   10360 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:34.980576   10360 out.go:177] * [embed-certs-606000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:34.990463   10360 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:34.990532   10360 notify.go:220] Checking for updates...
	I0729 03:45:34.998414   10360 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:35.001563   10360 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:35.004539   10360 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:35.007561   10360 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:35.010587   10360 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:35.013853   10360 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:35.013909   10360 config.go:182] Loaded profile config "stopped-upgrade-590000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 03:45:35.013952   10360 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:35.017500   10360 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:45:35.024525   10360 start.go:297] selected driver: qemu2
	I0729 03:45:35.024531   10360 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:45:35.024537   10360 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:35.026786   10360 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:45:35.029590   10360 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:45:35.033528   10360 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:35.033560   10360 cni.go:84] Creating CNI manager for ""
	I0729 03:45:35.033566   10360 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:35.033569   10360 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:45:35.033594   10360 start.go:340] cluster config:
	{Name:embed-certs-606000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:35.037060   10360 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:35.045512   10360 out.go:177] * Starting "embed-certs-606000" primary control-plane node in "embed-certs-606000" cluster
	I0729 03:45:35.049554   10360 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:45:35.049567   10360 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:45:35.049575   10360 cache.go:56] Caching tarball of preloaded images
	I0729 03:45:35.049626   10360 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:45:35.049630   10360 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:45:35.049686   10360 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/embed-certs-606000/config.json ...
	I0729 03:45:35.049697   10360 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/embed-certs-606000/config.json: {Name:mke5a42c521e3c24562e9f33745ebfaed96a979b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:45:35.050163   10360 start.go:360] acquireMachinesLock for embed-certs-606000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:35.050197   10360 start.go:364] duration metric: took 28.959µs to acquireMachinesLock for "embed-certs-606000"
	I0729 03:45:35.050208   10360 start.go:93] Provisioning new machine with config: &{Name:embed-certs-606000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:35.050236   10360 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:35.054484   10360 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:35.069377   10360 start.go:159] libmachine.API.Create for "embed-certs-606000" (driver="qemu2")
	I0729 03:45:35.069404   10360 client.go:168] LocalClient.Create starting
	I0729 03:45:35.069469   10360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:35.069500   10360 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:35.069509   10360 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:35.069545   10360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:35.069568   10360 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:35.069578   10360 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:35.069916   10360 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:35.228950   10360 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:35.379359   10360 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:35.379366   10360 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:35.379586   10360 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:35.389538   10360 main.go:141] libmachine: STDOUT: 
	I0729 03:45:35.389555   10360 main.go:141] libmachine: STDERR: 
	I0729 03:45:35.389615   10360 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2 +20000M
	I0729 03:45:35.397699   10360 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:35.397714   10360 main.go:141] libmachine: STDERR: 
	I0729 03:45:35.397730   10360 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:35.397733   10360 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:35.397748   10360 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:35.397776   10360 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:82:60:10:02:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:35.399470   10360 main.go:141] libmachine: STDOUT: 
	I0729 03:45:35.399484   10360 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:35.399505   10360 client.go:171] duration metric: took 330.103083ms to LocalClient.Create
	I0729 03:45:37.401630   10360 start.go:128] duration metric: took 2.351430459s to createHost
	I0729 03:45:37.401651   10360 start.go:83] releasing machines lock for "embed-certs-606000", held for 2.351488584s
	W0729 03:45:37.401669   10360 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:37.409340   10360 out.go:177] * Deleting "embed-certs-606000" in qemu2 ...
	W0729 03:45:37.419171   10360 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:37.419187   10360 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:42.421281   10360 start.go:360] acquireMachinesLock for embed-certs-606000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:42.421695   10360 start.go:364] duration metric: took 335.583µs to acquireMachinesLock for "embed-certs-606000"
	I0729 03:45:42.421826   10360 start.go:93] Provisioning new machine with config: &{Name:embed-certs-606000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:42.422088   10360 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:42.438817   10360 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:42.490617   10360 start.go:159] libmachine.API.Create for "embed-certs-606000" (driver="qemu2")
	I0729 03:45:42.490663   10360 client.go:168] LocalClient.Create starting
	I0729 03:45:42.490824   10360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:42.490889   10360 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:42.490907   10360 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:42.490966   10360 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:42.491009   10360 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:42.491023   10360 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:42.491512   10360 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:42.652697   10360 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:42.773580   10360 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:42.773585   10360 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:42.773796   10360 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:42.783289   10360 main.go:141] libmachine: STDOUT: 
	I0729 03:45:42.783306   10360 main.go:141] libmachine: STDERR: 
	I0729 03:45:42.783361   10360 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2 +20000M
	I0729 03:45:42.791198   10360 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:42.791215   10360 main.go:141] libmachine: STDERR: 
	I0729 03:45:42.791235   10360 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:42.791239   10360 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:42.791249   10360 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:42.791279   10360 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1e:51:b2:fd:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:42.792982   10360 main.go:141] libmachine: STDOUT: 
	I0729 03:45:42.792996   10360 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:42.793009   10360 client.go:171] duration metric: took 302.347542ms to LocalClient.Create
	I0729 03:45:44.795167   10360 start.go:128] duration metric: took 2.373094292s to createHost
	I0729 03:45:44.795226   10360 start.go:83] releasing machines lock for "embed-certs-606000", held for 2.373554792s
	W0729 03:45:44.795538   10360 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:44.806989   10360 out.go:177] 
	W0729 03:45:44.816284   10360 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:44.816324   10360 out.go:239] * 
	* 
	W0729 03:45:44.819211   10360 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:44.830104   10360 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-606000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (65.427625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
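
This first start breaks at the same point as the no-preload restarts: qemu-img builds and resizes the qcow2 disk successfully, and only the socket_vmnet_client hand-off fails. To rule out QEMU/HVF themselves, the same guest can be booted once with QEMU's built-in user-mode networking instead of the socket netdev (a diagnostic sketch, not part of the test suite; ISO and disk paths are copied from the log above):

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 -display none \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2

If that boots, the fault is isolated to the socket_vmnet daemon/socket rather than to the QEMU or hvf setup.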

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-503000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-503000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.864469291s)

-- stdout --
	* [default-k8s-diff-port-503000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-503000" primary control-plane node in "default-k8s-diff-port-503000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-503000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:45:37.429015   10380 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:37.429133   10380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:37.429137   10380 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:37.429139   10380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:37.429258   10380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:37.430253   10380 out.go:298] Setting JSON to false
	I0729 03:45:37.446432   10380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6306,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:37.446503   10380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:37.450443   10380 out.go:177] * [default-k8s-diff-port-503000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:37.464554   10380 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:37.464600   10380 notify.go:220] Checking for updates...
	I0729 03:45:37.472442   10380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:37.476453   10380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:37.479438   10380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:37.482476   10380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:37.485395   10380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:37.488791   10380 config.go:182] Loaded profile config "embed-certs-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:37.488873   10380 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:37.488935   10380 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:37.493409   10380 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:45:37.500406   10380 start.go:297] selected driver: qemu2
	I0729 03:45:37.500411   10380 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:45:37.500415   10380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:37.502704   10380 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:45:37.506517   10380 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:45:37.509550   10380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:37.509570   10380 cni.go:84] Creating CNI manager for ""
	I0729 03:45:37.509578   10380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:37.509583   10380 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:45:37.509611   10380 start.go:340] cluster config:
	{Name:default-k8s-diff-port-503000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:37.513567   10380 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:37.521425   10380 out.go:177] * Starting "default-k8s-diff-port-503000" primary control-plane node in "default-k8s-diff-port-503000" cluster
	I0729 03:45:37.525463   10380 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:45:37.525482   10380 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:45:37.525494   10380 cache.go:56] Caching tarball of preloaded images
	I0729 03:45:37.525571   10380 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:45:37.525577   10380 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:45:37.525646   10380 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/default-k8s-diff-port-503000/config.json ...
	I0729 03:45:37.525662   10380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/default-k8s-diff-port-503000/config.json: {Name:mkf847edc73091c76dd3b02e5a5a241366c76294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:45:37.525915   10380 start.go:360] acquireMachinesLock for default-k8s-diff-port-503000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:37.525955   10380 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "default-k8s-diff-port-503000"
	I0729 03:45:37.525968   10380 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:37.526004   10380 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:37.533450   10380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:37.552302   10380 start.go:159] libmachine.API.Create for "default-k8s-diff-port-503000" (driver="qemu2")
	I0729 03:45:37.552325   10380 client.go:168] LocalClient.Create starting
	I0729 03:45:37.552387   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:37.552422   10380 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:37.552433   10380 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:37.552473   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:37.552498   10380 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:37.552506   10380 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:37.552949   10380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:37.705063   10380 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:37.869636   10380 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:37.869642   10380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:37.869870   10380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:37.879321   10380 main.go:141] libmachine: STDOUT: 
	I0729 03:45:37.879348   10380 main.go:141] libmachine: STDERR: 
	I0729 03:45:37.879402   10380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2 +20000M
	I0729 03:45:37.887300   10380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:37.887316   10380 main.go:141] libmachine: STDERR: 
	I0729 03:45:37.887335   10380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:37.887342   10380 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:37.887352   10380 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:37.887377   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:5a:91:91:ad:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:37.888986   10380 main.go:141] libmachine: STDOUT: 
	I0729 03:45:37.889000   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:37.889023   10380 client.go:171] duration metric: took 336.700125ms to LocalClient.Create
	I0729 03:45:39.891159   10380 start.go:128] duration metric: took 2.365179875s to createHost
	I0729 03:45:39.891237   10380 start.go:83] releasing machines lock for "default-k8s-diff-port-503000", held for 2.365317292s
	W0729 03:45:39.891350   10380 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:39.902538   10380 out.go:177] * Deleting "default-k8s-diff-port-503000" in qemu2 ...
	W0729 03:45:39.930303   10380 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:39.930328   10380 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:44.932342   10380 start.go:360] acquireMachinesLock for default-k8s-diff-port-503000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:44.932431   10380 start.go:364] duration metric: took 63.5µs to acquireMachinesLock for "default-k8s-diff-port-503000"
	I0729 03:45:44.932469   10380 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:44.932508   10380 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:44.940407   10380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:44.956412   10380 start.go:159] libmachine.API.Create for "default-k8s-diff-port-503000" (driver="qemu2")
	I0729 03:45:44.956440   10380 client.go:168] LocalClient.Create starting
	I0729 03:45:44.956493   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:44.956543   10380 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:44.956553   10380 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:44.956589   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:44.956605   10380 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:44.956610   10380 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:44.956932   10380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:45.138084   10380 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:45.204830   10380 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:45.204838   10380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:45.205022   10380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:45.214014   10380 main.go:141] libmachine: STDOUT: 
	I0729 03:45:45.214030   10380 main.go:141] libmachine: STDERR: 
	I0729 03:45:45.214076   10380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2 +20000M
	I0729 03:45:45.222049   10380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:45.222065   10380 main.go:141] libmachine: STDERR: 
	I0729 03:45:45.222076   10380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:45.222086   10380 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:45.222096   10380 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:45.222124   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:33:34:d9:69:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:45.223885   10380 main.go:141] libmachine: STDOUT: 
	I0729 03:45:45.223901   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:45.223914   10380 client.go:171] duration metric: took 267.475584ms to LocalClient.Create
	I0729 03:45:47.226112   10380 start.go:128] duration metric: took 2.293628458s to createHost
	I0729 03:45:47.226266   10380 start.go:83] releasing machines lock for "default-k8s-diff-port-503000", held for 2.293781083s
	W0729 03:45:47.226592   10380 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-503000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-503000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:47.236172   10380 out.go:177] 
	W0729 03:45:47.241204   10380 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:47.241230   10380 out.go:239] * 
	* 
	W0729 03:45:47.244174   10380 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:47.252209   10380 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-503000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (66.8125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.93s)
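Every qemu2 failure in this run reduces to the same root cause visible in the STDERR lines above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor (fd=3). The following is a minimal diagnostic sketch in Go, assuming only the socket path taken from the log; it is illustrative and not minikube code.

// probe_socket_vmnet.go: a hedged sketch that dials the socket_vmnet unix
// socket the way socket_vmnet_client would, to separate "daemon not running"
// (connection refused, as in the log above) from "socket file missing".
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log

	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file problem:", err)
		return
	}
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// On this agent this would print "connect: connection refused",
		// matching the STDERR lines above.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Under that assumption, the dial failing with ECONNREFUSED points at the socket_vmnet daemon being down on the host rather than at anything the tests did.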
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-606000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-606000 create -f testdata/busybox.yaml: exit status 1 (32.132958ms)
** stderr ** 
	error: context "embed-certs-606000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-606000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (30.189ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (33.485375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
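The DeployApp failure is downstream of the start failure: because the VM never came up, minikube never wrote an "embed-certs-606000" context into the kubeconfig, so kubectl --context has nothing to resolve. A hedged client-go sketch for confirming which contexts actually exist (illustrative; not part of the test suite):

// list_contexts.go: loads kubeconfig via client-go's default loading rules,
// which honor the KUBECONFIG path shown in the log, and prints every
// context name.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		panic(err)
	}
	for name := range cfg.Contexts {
		fmt.Println(name) // "embed-certs-606000" would be absent here
	}
}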
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-606000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-606000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-606000 describe deploy/metrics-server -n kube-system: exit status 1 (30.183375ms)
** stderr ** 
	error: context "embed-certs-606000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-606000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (31.302375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)
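The assertion at start_stop_delete_test.go:221 greps the kubectl describe output for the registry-rewritten image name; with no cluster the output is empty, so the substring check fails. A rough, hypothetical reconstruction of that check follows; the real test helper may differ.

// check_addon_image.go: simplified stand-in for the check behind
// start_stop_delete_test.go:221, reconstructed from the log messages.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "embed-certs-606000" // profile name from the failing test
	out, _ := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()

	want := " fake.domain/registry.k8s.io/echoserver:1.4"
	if !strings.Contains(string(out), want) {
		fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
	}
}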
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-503000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-503000 create -f testdata/busybox.yaml: exit status 1 (30.615708ms)
** stderr ** 
	error: context "default-k8s-diff-port-503000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-503000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (28.678208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (29.456ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-503000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-503000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-503000 describe deploy/metrics-server -n kube-system: exit status 1 (26.448667ms)
** stderr ** 
	error: context "default-k8s-diff-port-503000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-503000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (29.096583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
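Each post-mortem above runs minikube status --format={{.Host}}, which renders the status through a Go text/template; the exit status 7 encodes the Stopped state. A small illustration of the template mechanism, with a hypothetical Status struct standing in for minikube's real type:

// format_status.go: shows how a {{.Host}} format string is rendered with
// text/template; the Status struct here is an illustrative stand-in.
package main

import (
	"os"
	"text/template"
)

type Status struct{ Host string }

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
}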
TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-606000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-606000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.18322075s)
-- stdout --
	* [embed-certs-606000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-606000" primary control-plane node in "embed-certs-606000" cluster
	* Restarting existing qemu2 VM for "embed-certs-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 03:45:48.981045   10454 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:48.981158   10454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:48.981161   10454 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:48.981164   10454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:48.981309   10454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:48.982252   10454 out.go:298] Setting JSON to false
	I0729 03:45:48.998575   10454 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6317,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:48.998653   10454 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:49.003426   10454 out.go:177] * [embed-certs-606000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:49.010480   10454 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:49.010542   10454 notify.go:220] Checking for updates...
	I0729 03:45:49.018387   10454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:49.019634   10454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:49.022382   10454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:49.025398   10454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:49.028416   10454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:49.031634   10454 config.go:182] Loaded profile config "embed-certs-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:49.031885   10454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:49.036428   10454 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:45:49.043443   10454 start.go:297] selected driver: qemu2
	I0729 03:45:49.043449   10454 start.go:901] validating driver "qemu2" against &{Name:embed-certs-606000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:49.043497   10454 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:49.045872   10454 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:49.045908   10454 cni.go:84] Creating CNI manager for ""
	I0729 03:45:49.045914   10454 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:49.045934   10454 start.go:340] cluster config:
	{Name:embed-certs-606000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:49.049402   10454 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:49.057400   10454 out.go:177] * Starting "embed-certs-606000" primary control-plane node in "embed-certs-606000" cluster
	I0729 03:45:49.061395   10454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:45:49.061411   10454 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:45:49.061421   10454 cache.go:56] Caching tarball of preloaded images
	I0729 03:45:49.061481   10454 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:45:49.061487   10454 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:45:49.061557   10454 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/embed-certs-606000/config.json ...
	I0729 03:45:49.062076   10454 start.go:360] acquireMachinesLock for embed-certs-606000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:49.062105   10454 start.go:364] duration metric: took 22.791µs to acquireMachinesLock for "embed-certs-606000"
	I0729 03:45:49.062115   10454 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:49.062120   10454 fix.go:54] fixHost starting: 
	I0729 03:45:49.062242   10454 fix.go:112] recreateIfNeeded on embed-certs-606000: state=Stopped err=<nil>
	W0729 03:45:49.062250   10454 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:49.069358   10454 out.go:177] * Restarting existing qemu2 VM for "embed-certs-606000" ...
	I0729 03:45:49.073352   10454 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:49.073402   10454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1e:51:b2:fd:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:49.075560   10454 main.go:141] libmachine: STDOUT: 
	I0729 03:45:49.075582   10454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:49.075609   10454 fix.go:56] duration metric: took 13.488833ms for fixHost
	I0729 03:45:49.075614   10454 start.go:83] releasing machines lock for "embed-certs-606000", held for 13.504792ms
	W0729 03:45:49.075620   10454 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:49.075658   10454 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:49.075663   10454 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:54.077752   10454 start.go:360] acquireMachinesLock for embed-certs-606000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:54.078136   10454 start.go:364] duration metric: took 299.834µs to acquireMachinesLock for "embed-certs-606000"
	I0729 03:45:54.078260   10454 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:54.078282   10454 fix.go:54] fixHost starting: 
	I0729 03:45:54.079043   10454 fix.go:112] recreateIfNeeded on embed-certs-606000: state=Stopped err=<nil>
	W0729 03:45:54.079068   10454 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:54.088681   10454 out.go:177] * Restarting existing qemu2 VM for "embed-certs-606000" ...
	I0729 03:45:54.091720   10454 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:54.091951   10454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:1e:51:b2:fd:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/embed-certs-606000/disk.qcow2
	I0729 03:45:54.101767   10454 main.go:141] libmachine: STDOUT: 
	I0729 03:45:54.101834   10454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:54.101979   10454 fix.go:56] duration metric: took 23.6985ms for fixHost
	I0729 03:45:54.102004   10454 start.go:83] releasing machines lock for "embed-certs-606000", held for 23.848916ms
	W0729 03:45:54.102211   10454 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:54.108649   10454 out.go:177] 
	W0729 03:45:54.112731   10454 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:54.112753   10454 out.go:239] * 
	* 
	W0729 03:45:54.115668   10454 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:54.123750   10454 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-606000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (65.869375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
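SecondStart follows the same shape as FirstStart, except fixHost restarts the existing VM instead of recreating it: one attempt, a fixed 5-second back-off ("Will try again in 5 seconds"), a second attempt, then exit with GUEST_PROVISION. A compressed sketch of that control flow, using illustrative names rather than minikube's actual helpers:

// retry_start.go: try once, wait 5s, try again, then give up with a
// provision error, mirroring the sequence visible in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; on this agent it always
// fails the same way the log does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest:", err)
		}
	}
}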
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-503000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-503000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.203079584s)
-- stdout --
	* [default-k8s-diff-port-503000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-503000" primary control-plane node in "default-k8s-diff-port-503000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-503000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-503000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 03:45:50.995223   10475 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:50.995367   10475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:50.995371   10475 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:50.995373   10475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:50.995510   10475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:50.996478   10475 out.go:298] Setting JSON to false
	I0729 03:45:51.012457   10475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6320,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:51.012528   10475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:51.016824   10475 out.go:177] * [default-k8s-diff-port-503000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:51.019882   10475 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:51.019965   10475 notify.go:220] Checking for updates...
	I0729 03:45:51.027831   10475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:51.030845   10475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:51.034839   10475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:51.037865   10475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:51.040832   10475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:51.044075   10475 config.go:182] Loaded profile config "default-k8s-diff-port-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:51.044347   10475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:51.047811   10475 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:45:51.054799   10475 start.go:297] selected driver: qemu2
	I0729 03:45:51.054806   10475 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:51.054874   10475 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:51.057262   10475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 03:45:51.057303   10475 cni.go:84] Creating CNI manager for ""
	I0729 03:45:51.057312   10475 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:51.057339   10475 start.go:340] cluster config:
	{Name:default-k8s-diff-port-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:51.060920   10475 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:51.068809   10475 out.go:177] * Starting "default-k8s-diff-port-503000" primary control-plane node in "default-k8s-diff-port-503000" cluster
	I0729 03:45:51.072845   10475 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:45:51.072862   10475 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:45:51.072875   10475 cache.go:56] Caching tarball of preloaded images
	I0729 03:45:51.072929   10475 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:45:51.072935   10475 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:45:51.072999   10475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/default-k8s-diff-port-503000/config.json ...
	I0729 03:45:51.073532   10475 start.go:360] acquireMachinesLock for default-k8s-diff-port-503000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:51.073561   10475 start.go:364] duration metric: took 23µs to acquireMachinesLock for "default-k8s-diff-port-503000"
	I0729 03:45:51.073572   10475 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:51.073578   10475 fix.go:54] fixHost starting: 
	I0729 03:45:51.073698   10475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-503000: state=Stopped err=<nil>
	W0729 03:45:51.073706   10475 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:51.077834   10475 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-503000" ...
	I0729 03:45:51.085758   10475 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:51.085804   10475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:33:34:d9:69:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:51.087848   10475 main.go:141] libmachine: STDOUT: 
	I0729 03:45:51.087870   10475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:51.087898   10475 fix.go:56] duration metric: took 14.319458ms for fixHost
	I0729 03:45:51.087905   10475 start.go:83] releasing machines lock for "default-k8s-diff-port-503000", held for 14.339583ms
	W0729 03:45:51.087911   10475 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:51.087962   10475 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:51.087967   10475 start.go:729] Will try again in 5 seconds ...
	I0729 03:45:56.090125   10475 start.go:360] acquireMachinesLock for default-k8s-diff-port-503000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:57.102294   10475 start.go:364] duration metric: took 1.012047292s to acquireMachinesLock for "default-k8s-diff-port-503000"
	I0729 03:45:57.102406   10475 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:45:57.102428   10475 fix.go:54] fixHost starting: 
	I0729 03:45:57.103185   10475 fix.go:112] recreateIfNeeded on default-k8s-diff-port-503000: state=Stopped err=<nil>
	W0729 03:45:57.103212   10475 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:45:57.108781   10475 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-503000" ...
	I0729 03:45:57.121761   10475 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:57.121988   10475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:33:34:d9:69:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/default-k8s-diff-port-503000/disk.qcow2
	I0729 03:45:57.131968   10475 main.go:141] libmachine: STDOUT: 
	I0729 03:45:57.132021   10475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:57.132094   10475 fix.go:56] duration metric: took 29.672125ms for fixHost
	I0729 03:45:57.132121   10475 start.go:83] releasing machines lock for "default-k8s-diff-port-503000", held for 29.779542ms
	W0729 03:45:57.132329   10475 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-503000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-503000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:57.139617   10475 out.go:177] 
	W0729 03:45:57.143758   10475 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:45:57.143778   10475 out.go:239] * 
	* 
	W0729 03:45:57.145634   10475 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:45:57.157757   10475 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-503000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (60.879709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.27s)
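
Every failure in this group reduces to the same driver-level error: the qemu2 driver launches the VM through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath in the config dumps above). A minimal Go sketch of that connectivity probe, shown only as an illustration and not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// SocketVMnetPath, as dumped in the cluster config above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// A stopped or missing socket_vmnet daemon produces the same
			// "Connection refused" seen in the driver logs above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}

A refused dial here on the CI host would explain why both the restart path and the create-then-retry path above fail before the VM ever boots.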

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-606000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (32.150458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-606000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-606000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-606000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.208667ms)

** stderr ** 
	error: context "embed-certs-606000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-606000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (28.467292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-606000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (28.934625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
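
The image assertion fails because "image list --format=json" returns nothing for a host that never started, so every expected v1.30.3 image lands on the -want side of the diff. A hypothetical sketch of that set comparison (the missing helper below is illustrative only; the real test prints a cmp-style diff):

	package main

	import "fmt"

	// missing returns the entries of want that are absent from got,
	// mirroring the "-want +got" lines printed above.
	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var out []string
		for _, w := range want {
			if !have[w] {
				out = append(out, w)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // image list is empty when the VM never booted
		for _, m := range missing(want, got) {
			fmt.Println("-", m)
		}
	}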

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-606000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-606000 --alsologtostderr -v=1: exit status 83 (40.638625ms)

-- stdout --
	* The control-plane node embed-certs-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-606000"

-- /stdout --
** stderr ** 
	I0729 03:45:54.389415   10494 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:54.389572   10494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:54.389579   10494 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:54.389581   10494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:54.389720   10494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:54.389941   10494 out.go:298] Setting JSON to false
	I0729 03:45:54.389948   10494 mustload.go:65] Loading cluster: embed-certs-606000
	I0729 03:45:54.390150   10494 config.go:182] Loaded profile config "embed-certs-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:54.393731   10494 out.go:177] * The control-plane node embed-certs-606000 host is not running: state=Stopped
	I0729 03:45:54.397771   10494 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-606000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-606000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (28.458042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (28.626417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (10s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-892000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-892000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.928927042s)

-- stdout --
	* [newest-cni-892000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-892000" primary control-plane node in "newest-cni-892000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-892000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:45:54.704558   10511 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:54.704708   10511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:54.704711   10511 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:54.704714   10511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:54.704847   10511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:54.705905   10511 out.go:298] Setting JSON to false
	I0729 03:45:54.722020   10511 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6323,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:45:54.722100   10511 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:45:54.725848   10511 out.go:177] * [newest-cni-892000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:45:54.732797   10511 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:45:54.732853   10511 notify.go:220] Checking for updates...
	I0729 03:45:54.738837   10511 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:45:54.741799   10511 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:45:54.744794   10511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:45:54.747805   10511 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:45:54.749240   10511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:45:54.753069   10511 config.go:182] Loaded profile config "default-k8s-diff-port-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:54.753128   10511 config.go:182] Loaded profile config "multinode-242000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:54.753197   10511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:45:54.757736   10511 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 03:45:54.762804   10511 start.go:297] selected driver: qemu2
	I0729 03:45:54.762811   10511 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:45:54.762818   10511 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:45:54.765172   10511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 03:45:54.765194   10511 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 03:45:54.771636   10511 out.go:177] * Automatically selected the socket_vmnet network
	I0729 03:45:54.774820   10511 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 03:45:54.774833   10511 cni.go:84] Creating CNI manager for ""
	I0729 03:45:54.774839   10511 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:45:54.774843   10511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:45:54.774871   10511 start.go:340] cluster config:
	{Name:newest-cni-892000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:45:54.778633   10511 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:45:54.787760   10511 out.go:177] * Starting "newest-cni-892000" primary control-plane node in "newest-cni-892000" cluster
	I0729 03:45:54.791821   10511 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:45:54.791836   10511 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 03:45:54.791846   10511 cache.go:56] Caching tarball of preloaded images
	I0729 03:45:54.791940   10511 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:45:54.791961   10511 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 03:45:54.792034   10511 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/newest-cni-892000/config.json ...
	I0729 03:45:54.792047   10511 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/newest-cni-892000/config.json: {Name:mk1b3e0f71d522e51226ab7191aba6dc8e149013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:45:54.792279   10511 start.go:360] acquireMachinesLock for newest-cni-892000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:45:54.792332   10511 start.go:364] duration metric: took 46.833µs to acquireMachinesLock for "newest-cni-892000"
	I0729 03:45:54.792343   10511 start.go:93] Provisioning new machine with config: &{Name:newest-cni-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:45:54.792388   10511 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:45:54.800726   10511 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:45:54.818674   10511 start.go:159] libmachine.API.Create for "newest-cni-892000" (driver="qemu2")
	I0729 03:45:54.818698   10511 client.go:168] LocalClient.Create starting
	I0729 03:45:54.818764   10511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:45:54.818793   10511 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:54.818802   10511 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:54.818842   10511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:45:54.818866   10511 main.go:141] libmachine: Decoding PEM data...
	I0729 03:45:54.818871   10511 main.go:141] libmachine: Parsing certificate...
	I0729 03:45:54.819319   10511 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:45:54.968043   10511 main.go:141] libmachine: Creating SSH key...
	I0729 03:45:55.081118   10511 main.go:141] libmachine: Creating Disk image...
	I0729 03:45:55.081123   10511 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:45:55.081326   10511 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:45:55.090438   10511 main.go:141] libmachine: STDOUT: 
	I0729 03:45:55.090453   10511 main.go:141] libmachine: STDERR: 
	I0729 03:45:55.090501   10511 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2 +20000M
	I0729 03:45:55.098282   10511 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:45:55.098297   10511 main.go:141] libmachine: STDERR: 
	I0729 03:45:55.098316   10511 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:45:55.098324   10511 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:45:55.098337   10511 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:45:55.098368   10511 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e6:cc:b3:42:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:45:55.099925   10511 main.go:141] libmachine: STDOUT: 
	I0729 03:45:55.099940   10511 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:45:55.099958   10511 client.go:171] duration metric: took 281.257083ms to LocalClient.Create
	I0729 03:45:57.102086   10511 start.go:128] duration metric: took 2.30972s to createHost
	I0729 03:45:57.102142   10511 start.go:83] releasing machines lock for "newest-cni-892000", held for 2.309844917s
	W0729 03:45:57.102215   10511 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:57.117721   10511 out.go:177] * Deleting "newest-cni-892000" in qemu2 ...
	W0729 03:45:57.170245   10511 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:45:57.170320   10511 start.go:729] Will try again in 5 seconds ...
	I0729 03:46:02.172522   10511 start.go:360] acquireMachinesLock for newest-cni-892000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:46:02.173027   10511 start.go:364] duration metric: took 415.75µs to acquireMachinesLock for "newest-cni-892000"
	I0729 03:46:02.173177   10511 start.go:93] Provisioning new machine with config: &{Name:newest-cni-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 03:46:02.173495   10511 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 03:46:02.176165   10511 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 03:46:02.225301   10511 start.go:159] libmachine.API.Create for "newest-cni-892000" (driver="qemu2")
	I0729 03:46:02.225357   10511 client.go:168] LocalClient.Create starting
	I0729 03:46:02.225487   10511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/ca.pem
	I0729 03:46:02.225566   10511 main.go:141] libmachine: Decoding PEM data...
	I0729 03:46:02.225586   10511 main.go:141] libmachine: Parsing certificate...
	I0729 03:46:02.225649   10511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-6349/.minikube/certs/cert.pem
	I0729 03:46:02.225693   10511 main.go:141] libmachine: Decoding PEM data...
	I0729 03:46:02.225708   10511 main.go:141] libmachine: Parsing certificate...
	I0729 03:46:02.226289   10511 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 03:46:02.384482   10511 main.go:141] libmachine: Creating SSH key...
	I0729 03:46:02.537646   10511 main.go:141] libmachine: Creating Disk image...
	I0729 03:46:02.537652   10511 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 03:46:02.537896   10511 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2.raw /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:46:02.547745   10511 main.go:141] libmachine: STDOUT: 
	I0729 03:46:02.547763   10511 main.go:141] libmachine: STDERR: 
	I0729 03:46:02.547810   10511 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2 +20000M
	I0729 03:46:02.555718   10511 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 03:46:02.555733   10511 main.go:141] libmachine: STDERR: 
	I0729 03:46:02.555743   10511 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:46:02.555748   10511 main.go:141] libmachine: Starting QEMU VM...
	I0729 03:46:02.555762   10511 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:46:02.555798   10511 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2c:05:0f:cd:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:46:02.557434   10511 main.go:141] libmachine: STDOUT: 
	I0729 03:46:02.557450   10511 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:46:02.557461   10511 client.go:171] duration metric: took 332.105542ms to LocalClient.Create
	I0729 03:46:04.559588   10511 start.go:128] duration metric: took 2.386108667s to createHost
	I0729 03:46:04.559703   10511 start.go:83] releasing machines lock for "newest-cni-892000", held for 2.386667709s
	W0729 03:46:04.560151   10511 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-892000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:46:04.573789   10511 out.go:177] 
	W0729 03:46:04.577760   10511 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:46:04.577795   10511 out.go:239] * 
	* 
	W0729 03:46:04.589102   10511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:46:04.592811   10511 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-892000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000: exit status 7 (66.13825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.00s)
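
Each post-mortem above runs "status --format={{.Host}}" and treats exit status 7 as non-fatal ("may be ok"), since it only signals a stopped host rather than a broken profile. A minimal sketch of that interpretation, assuming the binary path and profile name from the run above rather than the actual helpers_test.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "newest-cni-892000", "-n", "newest-cni-892000")
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			// Exit status 7: the profile exists but the host is stopped, so the
			// harness logs "status error: exit status 7 (may be ok)" and skips
			// log retrieval.
			fmt.Printf("host is %q; skipping log retrieval\n", state)
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host state:", state)
	}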

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-503000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (31.70625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-503000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-503000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-503000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.823042ms)

** stderr ** 
	error: context "default-k8s-diff-port-503000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-503000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (29.032834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-503000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (28.881959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-503000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-503000 --alsologtostderr -v=1: exit status 83 (44.788791ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-503000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-503000"

-- /stdout --
** stderr ** 
	I0729 03:45:57.415758   10533 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:45:57.415905   10533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:57.415908   10533 out.go:304] Setting ErrFile to fd 2...
	I0729 03:45:57.415911   10533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:45:57.416058   10533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:45:57.416265   10533 out.go:298] Setting JSON to false
	I0729 03:45:57.416272   10533 mustload.go:65] Loading cluster: default-k8s-diff-port-503000
	I0729 03:45:57.416460   10533 config.go:182] Loaded profile config "default-k8s-diff-port-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:45:57.421331   10533 out.go:177] * The control-plane node default-k8s-diff-port-503000 host is not running: state=Stopped
	I0729 03:45:57.428520   10533 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-503000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-503000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (28.756083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (28.895875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-503000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-892000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-892000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.184048459s)

-- stdout --
	* [newest-cni-892000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-892000" primary control-plane node in "newest-cni-892000" cluster
	* Restarting existing qemu2 VM for "newest-cni-892000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-892000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 03:46:08.042265   10580 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:46:08.042409   10580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:46:08.042412   10580 out.go:304] Setting ErrFile to fd 2...
	I0729 03:46:08.042415   10580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:46:08.042531   10580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:46:08.043554   10580 out.go:298] Setting JSON to false
	I0729 03:46:08.059630   10580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6337,"bootTime":1722243631,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:46:08.059701   10580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:46:08.064604   10580 out.go:177] * [newest-cni-892000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:46:08.071786   10580 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:46:08.071844   10580 notify.go:220] Checking for updates...
	I0729 03:46:08.078790   10580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:46:08.081794   10580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:46:08.084777   10580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:46:08.087760   10580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:46:08.090782   10580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:46:08.092444   10580 config.go:182] Loaded profile config "newest-cni-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 03:46:08.092704   10580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:46:08.096703   10580 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:46:08.103581   10580 start.go:297] selected driver: qemu2
	I0729 03:46:08.103592   10580 start.go:901] validating driver "qemu2" against &{Name:newest-cni-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:46:08.103662   10580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:46:08.105951   10580 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 03:46:08.105971   10580 cni.go:84] Creating CNI manager for ""
	I0729 03:46:08.105978   10580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:46:08.106000   10580 start.go:340] cluster config:
	{Name:newest-cni-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:46:08.109565   10580 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:46:08.116771   10580 out.go:177] * Starting "newest-cni-892000" primary control-plane node in "newest-cni-892000" cluster
	I0729 03:46:08.120868   10580 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:46:08.120883   10580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 03:46:08.120892   10580 cache.go:56] Caching tarball of preloaded images
	I0729 03:46:08.120955   10580 preload.go:172] Found /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 03:46:08.120961   10580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 03:46:08.121022   10580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/newest-cni-892000/config.json ...
	I0729 03:46:08.121545   10580 start.go:360] acquireMachinesLock for newest-cni-892000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:46:08.121577   10580 start.go:364] duration metric: took 22.292µs to acquireMachinesLock for "newest-cni-892000"
	I0729 03:46:08.121587   10580 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:46:08.121592   10580 fix.go:54] fixHost starting: 
	I0729 03:46:08.121707   10580 fix.go:112] recreateIfNeeded on newest-cni-892000: state=Stopped err=<nil>
	W0729 03:46:08.121715   10580 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:46:08.125778   10580 out.go:177] * Restarting existing qemu2 VM for "newest-cni-892000" ...
	I0729 03:46:08.133794   10580 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:46:08.133826   10580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2c:05:0f:cd:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:46:08.135718   10580 main.go:141] libmachine: STDOUT: 
	I0729 03:46:08.135736   10580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:46:08.135765   10580 fix.go:56] duration metric: took 14.17275ms for fixHost
	I0729 03:46:08.135769   10580 start.go:83] releasing machines lock for "newest-cni-892000", held for 14.187709ms
	W0729 03:46:08.135775   10580 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:46:08.135813   10580 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:46:08.135817   10580 start.go:729] Will try again in 5 seconds ...
	I0729 03:46:13.137881   10580 start.go:360] acquireMachinesLock for newest-cni-892000: {Name:mkd8d4c96737aa60e4ebb8043ddd64b3d26ee5d6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 03:46:13.138258   10580 start.go:364] duration metric: took 277.084µs to acquireMachinesLock for "newest-cni-892000"
	I0729 03:46:13.138423   10580 start.go:96] Skipping create...Using existing machine configuration
	I0729 03:46:13.138443   10580 fix.go:54] fixHost starting: 
	I0729 03:46:13.139099   10580 fix.go:112] recreateIfNeeded on newest-cni-892000: state=Stopped err=<nil>
	W0729 03:46:13.139126   10580 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 03:46:13.149342   10580 out.go:177] * Restarting existing qemu2 VM for "newest-cni-892000" ...
	I0729 03:46:13.152559   10580 qemu.go:418] Using hvf for hardware acceleration
	I0729 03:46:13.152838   10580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2c:05:0f:cd:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19337-6349/.minikube/machines/newest-cni-892000/disk.qcow2
	I0729 03:46:13.161690   10580 main.go:141] libmachine: STDOUT: 
	I0729 03:46:13.161757   10580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 03:46:13.161837   10580 fix.go:56] duration metric: took 23.394125ms for fixHost
	I0729 03:46:13.161853   10580 start.go:83] releasing machines lock for "newest-cni-892000", held for 23.530667ms
	W0729 03:46:13.162018   10580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-892000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-892000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 03:46:13.170473   10580 out.go:177] 
	W0729 03:46:13.174498   10580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 03:46:13.174569   10580 out.go:239] * 
	* 
	W0729 03:46:13.176990   10580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:46:13.185491   10580 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-892000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000: exit status 7 (70.088458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
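
Editor's note: this failure, like every other qemu2 start failure in this run, bottoms out in `Failed to connect to "/var/run/socket_vmnet": Connection refused`: minikube launches QEMU through socket_vmnet_client, but nothing is serving the socket on the agent. A minimal triage sketch to run on the host itself (the launchd job name and install layout are assumptions; Homebrew and manual installs differ):

	# Is the unix socket present at the path minikube was configured with?
	ls -l /var/run/socket_vmnet
	# Is any process actually serving a unix-domain socket by that name?
	sudo lsof -U | grep socket_vmnet
	# If socket_vmnet is managed by launchd, is its job loaded?
	sudo launchctl list | grep -i vmnet

If nothing is listening, restarting the socket_vmnet service should clear this whole family of GUEST_PROVISION failures.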

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-892000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000: exit status 7 (30.060625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
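
Editor's note: the want/got diff above lists every expected image as missing because SecondStart never brought the VM up, so `image list` had nothing to report. On a healthy profile the same expectation can be replayed by hand along these lines (image names copied from the diff; plain grep stands in for the test's go-cmp comparison):

	for img in \
	  registry.k8s.io/kube-apiserver:v1.31.0-beta.0 \
	  registry.k8s.io/pause:3.10 \
	  gcr.io/k8s-minikube/storage-provisioner:v5; do
	  out/minikube-darwin-arm64 -p newest-cni-892000 image list | grep -q "$img" || echo "missing: $img"
	done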

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-892000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-892000 --alsologtostderr -v=1: exit status 83 (41.632583ms)

-- stdout --
	* The control-plane node newest-cni-892000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-892000"

-- /stdout --
** stderr ** 
	I0729 03:46:13.370110   10594 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:46:13.370273   10594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:46:13.370276   10594 out.go:304] Setting ErrFile to fd 2...
	I0729 03:46:13.370279   10594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:46:13.370411   10594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:46:13.370624   10594 out.go:298] Setting JSON to false
	I0729 03:46:13.370630   10594 mustload.go:65] Loading cluster: newest-cni-892000
	I0729 03:46:13.370817   10594 config.go:182] Loaded profile config "newest-cni-892000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 03:46:13.375314   10594 out.go:177] * The control-plane node newest-cni-892000 host is not running: state=Stopped
	I0729 03:46:13.379204   10594 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-892000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-892000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000: exit status 7 (29.73525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-892000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000: exit status 7 (30.549334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-892000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 17.33
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 9.81
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.28
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.54
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 9.81
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.66
64 TestFunctional/serial/CacheCmd/cache/add_local 1.19
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.22
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.22
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.87
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 1.97
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 0.93
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.45
267 TestNoKubernetes/serial/Stop 1.88
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
284 TestStartStop/group/old-k8s-version/serial/Stop 2.93
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 2.03
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
308 TestStartStop/group/embed-certs/serial/Stop 3.69
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.31
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.16
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-462000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-462000: exit status 85 (96.897917ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |          |
	|         | -p download-only-462000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:19:19
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:19:19.727600    6845 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:19.727797    6845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:19.727801    6845 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:19.727804    6845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:19.727921    6845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	W0729 03:19:19.728006    6845 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19337-6349/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19337-6349/.minikube/config/config.json: no such file or directory
	I0729 03:19:19.729346    6845 out.go:298] Setting JSON to true
	I0729 03:19:19.747044    6845 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4728,"bootTime":1722243631,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:19:19.747119    6845 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:19.752719    6845 out.go:97] [download-only-462000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:19:19.752870    6845 notify.go:220] Checking for updates...
	W0729 03:19:19.752919    6845 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 03:19:19.755701    6845 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:19.758728    6845 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:19:19.762972    6845 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:19:19.766547    6845 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:19.770718    6845 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	W0729 03:19:19.775268    6845 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:19.775456    6845 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:19.778703    6845 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:19:19.778721    6845 start.go:297] selected driver: qemu2
	I0729 03:19:19.778733    6845 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:19:19.778794    6845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:19.781708    6845 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:19:19.787916    6845 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:19:19.788014    6845 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:19.788062    6845 cni.go:84] Creating CNI manager for ""
	I0729 03:19:19.788079    6845 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:19:19.788124    6845 start.go:340] cluster config:
	{Name:download-only-462000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:19.791687    6845 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:19:19.795700    6845 out.go:97] Downloading VM boot image ...
	I0729 03:19:19.795716    6845 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 03:19:27.337676    6845 out.go:97] Starting "download-only-462000" primary control-plane node in "download-only-462000" cluster
	I0729 03:19:27.337695    6845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:27.391638    6845 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:19:27.391644    6845 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:27.392210    6845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:27.396704    6845 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 03:19:27.396710    6845 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:27.474385    6845 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 03:19:34.789991    6845 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:34.790169    6845 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:35.484454    6845 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:19:35.484680    6845 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/download-only-462000/config.json ...
	I0729 03:19:35.484700    6845 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/download-only-462000/config.json: {Name:mkcb052033094f2f2cc451596777a23309f06e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:19:35.485805    6845 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:35.486161    6845 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 03:19:35.869108    6845 out.go:169] 
	W0729 03:19:35.875020    6845 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60 0x10692da60] Decompressors:map[bz2:0x14000901920 gz:0x14000901928 tar:0x140009018d0 tar.bz2:0x140009018e0 tar.gz:0x140009018f0 tar.xz:0x14000901900 tar.zst:0x14000901910 tbz2:0x140009018e0 tgz:0x140009018f0 txz:0x14000901900 tzst:0x14000901910 xz:0x14000901930 zip:0x14000901940 zst:0x14000901938] Getters:map[file:0x14001514550 http:0x140005fa1e0 https:0x140005fa230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 03:19:35.875044    6845 out_reason.go:110] 
	W0729 03:19:35.882081    6845 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 03:19:35.886843    6845 out.go:169] 
	
	
	* The control-plane node download-only-462000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-462000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
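
Editor's note: the cache failure shown in the log above is a plain 404 on the kubectl checksum URL rather than a flake; v1.20.0 appears to predate published darwin/arm64 kubectl release binaries, so dl.k8s.io has nothing at that path. The 404 is easy to reproduce directly (curl -fI sends a headers-only request and fails on HTTP errors):

	curl -fsI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 \
	  || echo "no darwin/arm64 kubectl published for v1.20.0"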

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-462000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (17.33s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-278000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-278000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (17.331824583s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (17.33s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-278000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-278000: exit status 85 (82.361334ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-462000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-462000        | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-278000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:19:36
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:19:36.306777    6870 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:36.306954    6870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:36.306957    6870 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:36.306959    6870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:36.307083    6870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:19:36.308073    6870 out.go:298] Setting JSON to true
	I0729 03:19:36.327282    6870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4745,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:19:36.327346    6870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:36.332423    6870 out.go:97] [download-only-278000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:19:36.332550    6870 notify.go:220] Checking for updates...
	I0729 03:19:36.336273    6870 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:36.339359    6870 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:19:36.343331    6870 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:19:36.346278    6870 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:36.349259    6870 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	W0729 03:19:36.355297    6870 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:36.355453    6870 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:36.358245    6870 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:19:36.358254    6870 start.go:297] selected driver: qemu2
	I0729 03:19:36.358258    6870 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:19:36.358300    6870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:36.361315    6870 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:19:36.366419    6870 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:19:36.366504    6870 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:36.366537    6870 cni.go:84] Creating CNI manager for ""
	I0729 03:19:36.366543    6870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:19:36.366548    6870 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:19:36.366579    6870 start.go:340] cluster config:
	{Name:download-only-278000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-278000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:36.370016    6870 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:19:36.373211    6870 out.go:97] Starting "download-only-278000" primary control-plane node in "download-only-278000" cluster
	I0729 03:19:36.373218    6870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:19:36.430624    6870 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:19:36.430645    6870 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:36.431474    6870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:19:36.435078    6870 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 03:19:36.435085    6870 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:36.506814    6870 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 03:19:49.327551    6870 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:49.327722    6870 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:49.870094    6870 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:19:49.870297    6870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/download-only-278000/config.json ...
	I0729 03:19:49.870313    6870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-6349/.minikube/profiles/download-only-278000/config.json: {Name:mk49f7d66da2912c09e8a2c6350bf7968e223880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:19:49.877161    6870 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:19:49.877285    6870 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-278000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-278000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-278000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (9.81s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-881000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-881000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (9.81366725s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (9.81s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-881000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-881000: exit status 85 (80.12175ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-462000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-462000             | download-only-462000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only             | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-278000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-278000             | download-only-278000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only             | download-only-881000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-881000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:19:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:19:53.919392    6896 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:53.919515    6896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:53.919519    6896 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:53.919528    6896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:53.919663    6896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:19:53.920758    6896 out.go:298] Setting JSON to true
	I0729 03:19:53.936924    6896 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4762,"bootTime":1722243631,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:19:53.936983    6896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:53.941954    6896 out.go:97] [download-only-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:19:53.942058    6896 notify.go:220] Checking for updates...
	I0729 03:19:53.945892    6896 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:53.947636    6896 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:19:53.950947    6896 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:19:53.953947    6896 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:53.955469    6896 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	W0729 03:19:53.961948    6896 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:53.962130    6896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:53.964870    6896 out.go:97] Using the qemu2 driver based on user configuration
	I0729 03:19:53.964878    6896 start.go:297] selected driver: qemu2
	I0729 03:19:53.964883    6896 start.go:901] validating driver "qemu2" against <nil>
	I0729 03:19:53.964920    6896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:53.967905    6896 out.go:169] Automatically selected the socket_vmnet network
	I0729 03:19:53.973031    6896 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 03:19:53.973140    6896 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:53.973155    6896 cni.go:84] Creating CNI manager for ""
	I0729 03:19:53.973165    6896 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:19:53.973171    6896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:19:53.973223    6896 start.go:340] cluster config:
	{Name:download-only-881000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:53.976581    6896 iso.go:125] acquiring lock: {Name:mka18f53eb8371d218609c5a8479e412cd60b7d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 03:19:53.979944    6896 out.go:97] Starting "download-only-881000" primary control-plane node in "download-only-881000" cluster
	I0729 03:19:53.979950    6896 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:19:54.043715    6896 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 03:19:54.043731    6896 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:54.044613    6896 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:19:54.047981    6896 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 03:19:54.047989    6896 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 03:19:54.121502    6896 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19337-6349/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-881000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-881000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-881000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-847000 --alsologtostderr --binary-mirror http://127.0.0.1:51039 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-847000
--- PASS: TestBinaryMirror (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-797000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-797000: exit status 85 (61.719625ms)

-- stdout --
	* Profile "addons-797000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-797000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-797000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-797000: exit status 85 (57.738875ms)

-- stdout --
	* Profile "addons-797000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-797000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.54s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status: exit status 7 (31.919791ms)

-- stdout --
	nospam-284000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status: exit status 7 (29.439709ms)

-- stdout --
	nospam-284000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status: exit status 7 (29.883875ms)

-- stdout --
	nospam-284000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
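
All three status probes above exit 7 rather than 0 because `minikube status` encodes component state in its exit code; with host, kubelet and apiserver all stopped, a non-zero exit is the expected outcome. A quick manual check (sketch, same built binary and profile):

    out/minikube-darwin-arm64 -p nospam-284000 status
    echo $?   # 7 while the host is stopped, matching the runs logged above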

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause: exit status 83 (39.755333ms)

-- stdout --
	* The control-plane node nospam-284000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-284000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause: exit status 83 (38.860292ms)

-- stdout --
	* The control-plane node nospam-284000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-284000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause: exit status 83 (39.810291ms)

-- stdout --
	* The control-plane node nospam-284000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-284000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause: exit status 83 (38.840583ms)

-- stdout --
	* The control-plane node nospam-284000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-284000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause: exit status 83 (38.932167ms)

-- stdout --
	* The control-plane node nospam-284000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-284000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause: exit status 83 (38.798917ms)

-- stdout --
	* The control-plane node nospam-284000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-284000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 stop: (3.375793875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 stop: (3.190611709s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-284000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-284000 stop: (3.24440025s)
--- PASS: TestErrorSpam/stop (9.81s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19337-6349/.minikube/files/etc/test/nested/copy/6843/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3754779098/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cache add minikube-local-cache-test:functional-568000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 cache delete minikube-local-cache-test:functional-568000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-568000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 config get cpus: exit status 14 (31.018291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 config get cpus: exit status 14 (40.28275ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
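
The sequence above round-trips a config key: `config get cpus` exits 14 while the key is unset, succeeds after `config set cpus 2`, and exits 14 again after `config unset cpus`. The same round trip by hand (sketch, same built binary):

    out/minikube-darwin-arm64 -p functional-568000 config get cpus    # exit 14: key not in config
    out/minikube-darwin-arm64 -p functional-568000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-568000 config get cpus    # prints 2
    out/minikube-darwin-arm64 -p functional-568000 config unset cpus
    out/minikube-darwin-arm64 -p functional-568000 config get cpus    # exit 14 again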

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-568000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-568000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (159.283458ms)

-- stdout --
	* [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 03:21:41.184230    7477 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:21:41.184421    7477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.184425    7477 out.go:304] Setting ErrFile to fd 2...
	I0729 03:21:41.184428    7477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.184601    7477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:21:41.185945    7477 out.go:298] Setting JSON to false
	I0729 03:21:41.205906    7477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4870,"bootTime":1722243631,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:21:41.205966    7477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:21:41.210989    7477 out.go:177] * [functional-568000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 03:21:41.216872    7477 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:21:41.216922    7477 notify.go:220] Checking for updates...
	I0729 03:21:41.224796    7477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:21:41.227899    7477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:21:41.230885    7477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:21:41.233773    7477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:21:41.236829    7477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:21:41.240154    7477 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:21:41.240457    7477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:21:41.243822    7477 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 03:21:41.250848    7477 start.go:297] selected driver: qemu2
	I0729 03:21:41.250856    7477 start.go:901] validating driver "qemu2" against &{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:21:41.250908    7477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:21:41.257860    7477 out.go:177] 
	W0729 03:21:41.261892    7477 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 03:21:41.265864    7477 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-568000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
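
`--dry-run` still exercises the full validation path, which is what this test relies on: a 250MB request fails the 1800MB usable minimum with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work happens. Replayable by hand (sketch; flags taken from the run above):

    out/minikube-darwin-arm64 start -p functional-568000 --dry-run --memory 250MB --driver=qemu2
    echo $?   # 23, per the validation failure logged above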

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-568000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-568000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.935417ms)

-- stdout --
	* [functional-568000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 03:21:41.410486    7488 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:21:41.410620    7488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.410624    7488 out.go:304] Setting ErrFile to fd 2...
	I0729 03:21:41.410626    7488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:21:41.410750    7488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-6349/.minikube/bin
	I0729 03:21:41.412135    7488 out.go:298] Setting JSON to false
	I0729 03:21:41.428771    7488 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4870,"bootTime":1722243631,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 03:21:41.428849    7488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:21:41.432953    7488 out.go:177] * [functional-568000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 03:21:41.439922    7488 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:21:41.439993    7488 notify.go:220] Checking for updates...
	I0729 03:21:41.446839    7488 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	I0729 03:21:41.449901    7488 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 03:21:41.452934    7488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:21:41.455940    7488 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	I0729 03:21:41.458854    7488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:21:41.462149    7488 config.go:182] Loaded profile config "functional-568000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:21:41.462417    7488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:21:41.466824    7488 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 03:21:41.473869    7488 start.go:297] selected driver: qemu2
	I0729 03:21:41.473875    7488 start.go:901] validating driver "qemu2" against &{Name:functional-568000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-568000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:21:41.473932    7488 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:21:41.479857    7488 out.go:177] 
	W0729 03:21:41.483896    7488 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 03:21:41.487830    7488 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
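
The French output is the localized rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY validation failure seen in DryRun. The log does not show how the harness selects the locale; one plausible way to reproduce the localized output by hand is via the locale environment (the LC_ALL value below is an assumption, not taken from this log):

    # LC_ALL is assumed here; the harness may select the locale differently
    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-568000 --dry-run --memory 250MB --driver=qemu2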

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.838502042s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-568000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image rm docker.io/kicbase/echo-server:functional-568000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-568000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 image save --daemon docker.io/kicbase/echo-server:functional-568000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-568000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.817125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.983583ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "44.812833ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.890917ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012224959s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
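
dscacheutil queries macOS's Directory Services cache rather than hitting a DNS server directly, so this subtest only resolves while the tunnel started earlier in the TunnelCmd sequence is still running. The probe is runnable as-is on the same machine:

    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.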

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-568000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-568000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-568000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-568000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (1.97s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-900000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-900000 --output=json --user=testUser: (1.97148875s)
--- PASS: TestJSONOutput/stop/Command (1.97s)
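The --user=testUser flag exists so the Audit subtest that follows can find this invocation in minikube's audit log, a JSON-lines file kept under the minikube home directory. A field-agnostic sketch for scanning such a file; the path and field names here are assumptions for illustration, not taken from this report:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical location; adjust to your MINIKUBE_HOME layout.
		f, err := os.Open(os.Getenv("HOME") + "/.minikube/logs/audit.json")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Decode each line loosely rather than assuming a schema;
			// the "user" key is an assumed field name.
			var entry map[string]any
			if err := json.Unmarshal(sc.Bytes(), &entry); err != nil {
				continue
			}
			fmt.Println(entry["user"], entry)
		}
	}
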
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-841000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-841000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.760208ms)
-- stdout --
	{"specversion":"1.0","id":"348bcdd0-5b6a-47dc-90f8-01f3baa93e44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-841000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fad712cd-1209-47fb-9652-a753c7814aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"44a243b0-7cf7-4f71-9ea1-d522294cd649","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig"}}
	{"specversion":"1.0","id":"6bffb131-4dac-47cd-a94f-08a25d8a1875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f480ba55-eedf-4f80-bb76-7300d95ff832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5fd363d0-ede1-4c8d-a748-35c8037a8576","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube"}}
	{"specversion":"1.0","id":"15c3ad33-14fd-48f0-85ba-be386516f52f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6efcc3d3-eaad-4fb8-8802-b4fbdd06d22d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-841000
--- PASS: TestErrorJSONOutput (0.20s)
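Every stdout line above is a CloudEvents envelope whose data payload carries minikube-specific fields; the test passes because the final event is a well-formed io.k8s.sigs.minikube.error with exitcode 56. A minimal decoder sketch for streams like this (the struct mirrors only the envelope fields visible above; it is not minikube's own type):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors the envelope fields visible in the stdout above.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON lines in the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout captured above, this would print: exit code 56: The driver 'fail' is not supported on darwin/arm64.
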
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)
TestStoppedBinaryUpgrade/Setup (0.93s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.93s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-460000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (94.752084ms)
-- stdout --
	* [NoKubernetes-460000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-6349/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-6349/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-460000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-460000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.136834ms)
-- stdout --
	* The control-plane node NoKubernetes-460000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-460000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
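The check leans entirely on exit codes: systemctl is-active --quiet prints nothing and reports through its status (0 only when the unit is active), and here even the outer minikube ssh fails with status 83 because the host is stopped. A sketch of driving that contract from Go (binary, profile name, and remote command are taken from the log; the surrounding program is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; success or failure is carried in
		// the exit status alone, which is all the test needs to assert.
		cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-460000",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
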
TestNoKubernetes/serial/ProfileList (31.45s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.696688375s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.752380208s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.45s)
TestNoKubernetes/serial/Stop (1.88s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-460000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-460000: (1.87486175s)
--- PASS: TestNoKubernetes/serial/Stop (1.88s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-460000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-460000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.673208ms)
-- stdout --
	* The control-plane node NoKubernetes-460000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-460000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-590000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)
TestStartStop/group/old-k8s-version/serial/Stop (2.93s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-363000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-363000 --alsologtostderr -v=3: (2.924831083s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.93s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-363000 -n old-k8s-version-363000: exit status 7 (53.973291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-363000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
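The --format={{.Host}} argument is a Go text/template rendered against minikube's status value, which is how the bare Stopped in the stdout above is produced. A toy illustration of the mechanism (the status struct here is an illustrative stand-in, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// status is an illustrative stand-in for the value minikube
	// renders its --format template against.
	type status struct {
		Host    string
		Kubelet string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the stdout captured above.
		_ = tmpl.Execute(os.Stdout, status{Host: "Stopped"})
	}
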
TestStartStop/group/no-preload/serial/Stop (2.03s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-092000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-092000 --alsologtostderr -v=3: (2.027163917s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.03s)
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-092000 -n no-preload-092000: exit status 7 (30.348333ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-092000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
TestStartStop/group/embed-certs/serial/Stop (3.69s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-606000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-606000 --alsologtostderr -v=3: (3.687089458s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.69s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (3.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-503000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-503000 --alsologtostderr -v=3: (3.309345125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.31s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-606000 -n embed-certs-606000: exit status 7 (54.148ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-606000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-503000 -n default-k8s-diff-port-503000: exit status 7 (55.551666ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-503000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-892000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
TestStartStop/group/newest-cni/serial/Stop (3.16s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-892000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-892000 --alsologtostderr -v=3: (3.155518375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.16s)
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-892000 -n newest-cni-892000: exit status 7 (54.320917ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-892000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
Test skip (24/266)
TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)
TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/MountCmd/any-port (11.69s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2897176883/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722248466071302000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2897176883/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722248466071302000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2897176883/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722248466071302000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2897176883/001/test-1722248466071302000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (51.312042ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.843334ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.071792ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.021416ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.764666ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.361333ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.307208ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo umount -f /mount-9p": exit status 83 (44.675417ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2897176883/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.69s)
TestFunctional/parallel/MountCmd/specific-port (11.11s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4154191813/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.43725ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.781916ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.416208ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.415ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.399917ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.880875ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.109875ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "sudo umount -f /mount-9p": exit status 83 (41.64125ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-568000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4154191813/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.11s)
TestFunctional/parallel/MountCmd/VerifyCleanup (12.25s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3886750129/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3886750129/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3886750129/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (84.498083ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (86.531209ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (84.163958ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (87.782959ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (82.355375ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (86.328584ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (84.806791ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-568000 ssh "findmnt -T" /mount1: exit status 83 (87.077959ms)
-- stdout --
	* The control-plane node functional-568000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-568000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3886750129/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3886750129/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-568000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3886750129/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.25s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-218000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-218000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-218000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/hosts:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/resolv.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-218000

>>> host: crictl pods:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crictl containers:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: describe netcat deployment:
error: context "cilium-218000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-218000" does not exist

>>> k8s: netcat logs:
error: context "cilium-218000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-218000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-218000" does not exist

>>> k8s: coredns logs:
error: context "cilium-218000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-218000" does not exist

>>> k8s: api server logs:
error: context "cilium-218000" does not exist

>>> host: /etc/cni:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: ip a s:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: ip r s:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: iptables-save:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: iptables table nat:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-218000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-218000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-218000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-218000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-218000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-218000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-218000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-218000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-218000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-218000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-218000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: kubelet daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: kubelet logs:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-218000

>>> host: docker daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: docker daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: docker system info:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: cri-docker daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: cri-docker daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: cri-dockerd version:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: containerd daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: containerd daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: containerd config dump:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crio daemon status:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crio daemon config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: /etc/crio:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

>>> host: crio config:
* Profile "cilium-218000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218000"

----------------------- debugLogs end: cilium-218000 [took: 2.235584125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-218000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-625000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-625000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)