Test Report: QEMU_macOS 19636

a6feba20ebb4dc887776b248ea5c810d31cc7846:2024-09-13:36198

Failed tests (98/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 31.52
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.11
33 TestAddons/parallel/Registry 71.34
45 TestCertOptions 10.3
46 TestCertExpiration 195.33
47 TestDockerFlags 10.12
48 TestForceSystemdFlag 10.07
49 TestForceSystemdEnv 10.61
94 TestFunctional/parallel/ServiceCmdConnect 34.34
166 TestMultiControlPlane/serial/StopSecondaryNode 214.13
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.59
168 TestMultiControlPlane/serial/RestartSecondaryNode 208.93
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.41
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.02
173 TestMultiControlPlane/serial/StopCluster 202.1
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 9.93
183 TestJSONOutput/start/Command 9.9
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.04
212 TestMinikubeProfile 10.11
215 TestMountStart/serial/StartWithMountFirst 9.94
218 TestMultiNode/serial/FreshStart2Nodes 9.89
219 TestMultiNode/serial/DeployApp2Nodes 115.98
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 54.68
227 TestMultiNode/serial/RestartKeepsNodes 9.08
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 3.58
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.22
235 TestPreload 10.08
237 TestScheduledStopUnix 10.05
238 TestSkaffold 12.72
241 TestRunningBinaryUpgrade 596.96
243 TestKubernetesUpgrade 18.33
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.56
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.16
259 TestStoppedBinaryUpgrade/Upgrade 575.83
261 TestPause/serial/Start 10.15
271 TestNoKubernetes/serial/StartWithK8s 10.02
272 TestNoKubernetes/serial/StartWithStopK8s 5.31
273 TestNoKubernetes/serial/Start 5.32
277 TestNoKubernetes/serial/StartNoArgs 5.34
279 TestNetworkPlugins/group/auto/Start 9.94
280 TestNetworkPlugins/group/flannel/Start 9.81
281 TestNetworkPlugins/group/kindnet/Start 9.96
282 TestNetworkPlugins/group/enable-default-cni/Start 9.82
283 TestNetworkPlugins/group/bridge/Start 9.82
284 TestNetworkPlugins/group/kubenet/Start 9.8
285 TestNetworkPlugins/group/custom-flannel/Start 9.89
286 TestNetworkPlugins/group/calico/Start 9.78
287 TestNetworkPlugins/group/false/Start 9.89
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.93
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/old-k8s-version/serial/Pause 0.1
301 TestStartStop/group/no-preload/serial/FirstStart 9.83
302 TestStartStop/group/no-preload/serial/DeployApp 0.09
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
306 TestStartStop/group/no-preload/serial/SecondStart 5.34
308 TestStartStop/group/embed-certs/serial/FirstStart 11.06
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/no-preload/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
315 TestStartStop/group/embed-certs/serial/DeployApp 0.09
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
319 TestStartStop/group/embed-certs/serial/SecondStart 5.4
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/embed-certs/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/FirstStart 9.95
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/SecondStart 5.26
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (31.52s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-007000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-007000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (31.517775375s)

-- stdout --
	{"specversion":"1.0","id":"e5d5be49-1c51-4433-8b9b-956106d2f475","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-007000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb3a18ba-b6a8-4003-a16c-b92e46f08d95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"9cb1002a-55e8-4f72-8197-518d0aeefbbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig"}}
	{"specversion":"1.0","id":"1e9a2b23-8464-4277-9fe1-2761f34d2813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"567e9e5b-ceff-4646-bb53-8eb15eb42e1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"96e1ecc2-8f67-46f9-9bd5-cfe836ec2a8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube"}}
	{"specversion":"1.0","id":"5c8aa70b-eb5b-45ce-89c4-22a5fb4e48d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"d95344e1-13a6-46c9-9937-3bae51273e14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1d26bb5-2b21-43af-8d7f-f38c5f004e8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b9f20cc6-f480-404d-b272-c6feef53bb6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"90ac852e-ef75-4da6-ad78-98e24705cd8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-007000\" primary control-plane node in \"download-only-007000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"03e00894-fd7f-4128-9f2e-437334fabe75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4871c1c0-15fe-4e3e-984b-4df596f120ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109359720 0x109359720 0x109359720 0x109359720 0x109359720 0x109359720 0x109359720] Decompressors:map[bz2:0x14000121230 gz:0x14000121238 tar:0x14000121190 tar.bz2:0x140001211d0 tar.gz:0x140001211e0 tar.xz:0x14000121210 tar.zst:0x14000121220 tbz2:0x140001211d0 tgz:0x14
0001211e0 txz:0x14000121210 tzst:0x14000121220 xz:0x14000121240 zip:0x14000121250 zst:0x14000121248] Getters:map[file:0x14000065b50 http:0x140007c01e0 https:0x140007c0230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ee243e6e-ea87-44a5-8999-30b343d71d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0913 11:19:51.756132    1697 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:19:51.756300    1697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:19:51.756303    1697 out.go:358] Setting ErrFile to fd 2...
	I0913 11:19:51.756306    1697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:19:51.756424    1697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	W0913 11:19:51.756507    1697 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19636-1170/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19636-1170/.minikube/config/config.json: no such file or directory
	I0913 11:19:51.757769    1697 out.go:352] Setting JSON to true
	I0913 11:19:51.774677    1697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1154,"bootTime":1726250437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:19:51.774743    1697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:19:51.780788    1697 out.go:97] [download-only-007000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:19:51.780961    1697 notify.go:220] Checking for updates...
	W0913 11:19:51.780984    1697 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 11:19:51.783616    1697 out.go:169] MINIKUBE_LOCATION=19636
	I0913 11:19:51.786756    1697 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:19:51.790776    1697 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:19:51.793646    1697 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:19:51.796651    1697 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	W0913 11:19:51.801183    1697 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 11:19:51.801352    1697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:19:51.805721    1697 out.go:97] Using the qemu2 driver based on user configuration
	I0913 11:19:51.805741    1697 start.go:297] selected driver: qemu2
	I0913 11:19:51.805755    1697 start.go:901] validating driver "qemu2" against <nil>
	I0913 11:19:51.805823    1697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 11:19:51.808657    1697 out.go:169] Automatically selected the socket_vmnet network
	I0913 11:19:51.814168    1697 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 11:19:51.814258    1697 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 11:19:51.814306    1697 cni.go:84] Creating CNI manager for ""
	I0913 11:19:51.814348    1697 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 11:19:51.814400    1697 start.go:340] cluster config:
	{Name:download-only-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:19:51.819465    1697 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 11:19:51.823702    1697 out.go:97] Downloading VM boot image ...
	I0913 11:19:51.823719    1697 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso
	I0913 11:20:07.894612    1697 out.go:97] Starting "download-only-007000" primary control-plane node in "download-only-007000" cluster
	I0913 11:20:07.894633    1697 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 11:20:07.951659    1697 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 11:20:07.951681    1697 cache.go:56] Caching tarball of preloaded images
	I0913 11:20:07.951833    1697 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 11:20:07.958013    1697 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 11:20:07.958024    1697 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:08.050954    1697 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 11:20:21.503120    1697 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:21.503283    1697 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:22.200836    1697 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 11:20:22.201083    1697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/download-only-007000/config.json ...
	I0913 11:20:22.201100    1697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/download-only-007000/config.json: {Name:mkd7dab0aea3bdd8331068015415d9340f95ea68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:20:22.201363    1697 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 11:20:22.201562    1697 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0913 11:20:23.196578    1697 out.go:193] 
	W0913 11:20:23.202729    1697 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109359720 0x109359720 0x109359720 0x109359720 0x109359720 0x109359720 0x109359720] Decompressors:map[bz2:0x14000121230 gz:0x14000121238 tar:0x14000121190 tar.bz2:0x140001211d0 tar.gz:0x140001211e0 tar.xz:0x14000121210 tar.zst:0x14000121220 tbz2:0x140001211d0 tgz:0x140001211e0 txz:0x14000121210 tzst:0x14000121220 xz:0x14000121240 zip:0x14000121250 zst:0x14000121248] Getters:map[file:0x14000065b50 http:0x140007c01e0 https:0x140007c0230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0913 11:20:23.202754    1697 out_reason.go:110] 
	W0913 11:20:23.210735    1697 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 11:20:23.214607    1697 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-007000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (31.52s)
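Note: the root cause is the 404 on the kubectl checksum URL; upstream appears never to have published darwin/arm64 kubectl binaries for v1.20.0 (Apple Silicon builds show up only in later releases), so the checksum fetch fails before the download starts. Illustrative checks (not part of the test run), using the exact URLs from the error above:
	$ curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1   # expect a 404, matching "bad response code: 404" above
	$ curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 | head -n 1   # the amd64 build of the same release should return 200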

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
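Note: this subtest only checks that the kubectl binary cached by the json-events subtest exists on disk, so it fails as a direct consequence of the 404 above. The assertion is equivalent to this check (macOS stat syntax, shown for illustration):
	$ stat -f '%N: %z bytes' /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl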

TestOffline (10.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-222000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-222000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.962515167s)

-- stdout --
	* [offline-docker-222000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-222000" primary control-plane node in "offline-docker-222000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-222000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:06:13.827758    4574 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:06:13.827910    4574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:13.827914    4574 out.go:358] Setting ErrFile to fd 2...
	I0913 12:06:13.827917    4574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:13.828040    4574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:06:13.829227    4574 out.go:352] Setting JSON to false
	I0913 12:06:13.846880    4574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3936,"bootTime":1726250437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:06:13.846953    4574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:06:13.852396    4574 out.go:177] * [offline-docker-222000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:06:13.860280    4574 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:06:13.860315    4574 notify.go:220] Checking for updates...
	I0913 12:06:13.868243    4574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:06:13.871260    4574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:06:13.874301    4574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:06:13.877215    4574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:06:13.880274    4574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:06:13.883651    4574 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:06:13.883712    4574 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:06:13.887225    4574 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:06:13.894303    4574 start.go:297] selected driver: qemu2
	I0913 12:06:13.894313    4574 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:06:13.894321    4574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:06:13.896433    4574 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:06:13.899266    4574 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:06:13.902366    4574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:06:13.902381    4574 cni.go:84] Creating CNI manager for ""
	I0913 12:06:13.902402    4574 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:06:13.902408    4574 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:06:13.902436    4574 start.go:340] cluster config:
	{Name:offline-docker-222000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:06:13.905997    4574 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:06:13.913250    4574 out.go:177] * Starting "offline-docker-222000" primary control-plane node in "offline-docker-222000" cluster
	I0913 12:06:13.917294    4574 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:06:13.917326    4574 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:06:13.917336    4574 cache.go:56] Caching tarball of preloaded images
	I0913 12:06:13.917409    4574 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:06:13.917414    4574 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:06:13.917483    4574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/offline-docker-222000/config.json ...
	I0913 12:06:13.917493    4574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/offline-docker-222000/config.json: {Name:mkb139fdc9ac9ad8a0e28bb7dba46dde4a63f753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:06:13.917722    4574 start.go:360] acquireMachinesLock for offline-docker-222000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:13.917756    4574 start.go:364] duration metric: took 26.667µs to acquireMachinesLock for "offline-docker-222000"
	I0913 12:06:13.917766    4574 start.go:93] Provisioning new machine with config: &{Name:offline-docker-222000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:13.917790    4574 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:13.926281    4574 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:13.942401    4574 start.go:159] libmachine.API.Create for "offline-docker-222000" (driver="qemu2")
	I0913 12:06:13.942426    4574 client.go:168] LocalClient.Create starting
	I0913 12:06:13.942502    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:13.942533    4574 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:13.942543    4574 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:13.942590    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:13.942613    4574 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:13.942621    4574 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:13.942988    4574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:14.099388    4574 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:14.257093    4574 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:14.257103    4574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:14.257317    4574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2
	I0913 12:06:14.267109    4574 main.go:141] libmachine: STDOUT: 
	I0913 12:06:14.267136    4574 main.go:141] libmachine: STDERR: 
	I0913 12:06:14.267209    4574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2 +20000M
	I0913 12:06:14.276232    4574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:14.276257    4574 main.go:141] libmachine: STDERR: 
	I0913 12:06:14.276275    4574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2
	I0913 12:06:14.276281    4574 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:14.276292    4574 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:14.276323    4574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:2c:32:b8:00:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2
	I0913 12:06:14.278411    4574 main.go:141] libmachine: STDOUT: 
	I0913 12:06:14.278426    4574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:14.278446    4574 client.go:171] duration metric: took 336.020291ms to LocalClient.Create
	I0913 12:06:16.280468    4574 start.go:128] duration metric: took 2.362757458s to createHost
	I0913 12:06:16.280518    4574 start.go:83] releasing machines lock for "offline-docker-222000", held for 2.362851208s
	W0913 12:06:16.280550    4574 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:16.294157    4574 out.go:177] * Deleting "offline-docker-222000" in qemu2 ...
	W0913 12:06:16.308222    4574 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:16.308233    4574 start.go:729] Will try again in 5 seconds ...
	I0913 12:06:21.310331    4574 start.go:360] acquireMachinesLock for offline-docker-222000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:21.310864    4574 start.go:364] duration metric: took 406.75µs to acquireMachinesLock for "offline-docker-222000"
	I0913 12:06:21.311010    4574 start.go:93] Provisioning new machine with config: &{Name:offline-docker-222000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:21.311238    4574 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:21.319732    4574 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:21.373947    4574 start.go:159] libmachine.API.Create for "offline-docker-222000" (driver="qemu2")
	I0913 12:06:21.374001    4574 client.go:168] LocalClient.Create starting
	I0913 12:06:21.374123    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:21.374183    4574 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:21.374201    4574 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:21.374277    4574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:21.374329    4574 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:21.374340    4574 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:21.374891    4574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:21.542332    4574 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:21.695094    4574 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:21.695101    4574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:21.695308    4574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2
	I0913 12:06:21.704346    4574 main.go:141] libmachine: STDOUT: 
	I0913 12:06:21.704373    4574 main.go:141] libmachine: STDERR: 
	I0913 12:06:21.704431    4574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2 +20000M
	I0913 12:06:21.712180    4574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:21.712205    4574 main.go:141] libmachine: STDERR: 
	I0913 12:06:21.712219    4574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2
	I0913 12:06:21.712223    4574 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:21.712234    4574 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:21.712264    4574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:92:ad:62:a9:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/offline-docker-222000/disk.qcow2
	I0913 12:06:21.713798    4574 main.go:141] libmachine: STDOUT: 
	I0913 12:06:21.713810    4574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:21.713823    4574 client.go:171] duration metric: took 339.827875ms to LocalClient.Create
	I0913 12:06:23.715929    4574 start.go:128] duration metric: took 2.404757125s to createHost
	I0913 12:06:23.715978    4574 start.go:83] releasing machines lock for "offline-docker-222000", held for 2.4051855s
	W0913 12:06:23.716348    4574 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-222000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-222000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:23.722676    4574 out.go:201] 
	W0913 12:06:23.733830    4574 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:06:23.733877    4574 out.go:270] * 
	* 
	W0913 12:06:23.737055    4574 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:06:23.745575    4574 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-222000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-13 12:06:23.760966 -0700 PDT m=+2792.223125418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-222000 -n offline-docker-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-222000 -n offline-docker-222000: exit status 7 (69.300084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-222000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-222000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-222000
--- FAIL: TestOffline (10.11s)
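Note: this is the same failure signature as most other qemu2 starts in this run: the socket_vmnet client cannot connect to /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon on the build agent rather than at the test itself. A triage sketch, assuming the launchd-managed socket_vmnet install that minikube's qemu2 driver documentation describes:
	$ ls -l /var/run/socket_vmnet                   # does the socket exist at the path the client uses?
	$ sudo launchctl list | grep -i socket_vmnet    # is the daemon loaded? (label depends on install method)
	$ sudo brew services restart socket_vmnet       # for a Homebrew install, restarting the root service usually recreates the socket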

TestAddons/parallel/Registry (71.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.535625ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-ldnch" [c3d43aaf-f9da-480c-814d-b04f250bd74e] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011485333s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q27zh" [f3c7273d-3e4f-4852-9991-fc159c509855] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010430208s
addons_test.go:338: (dbg) Run:  kubectl --context addons-166000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-166000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-166000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.058723292s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-166000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 ip
2024/09/13 11:33:46 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable registry --alsologtostderr -v=1
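Note: both registry pods were Running, so the failure is the in-cluster wget timing out against the registry Service rather than the pods themselves. To narrow it down, check whether the Service has endpoints and re-run the test's own probe (illustrative commands, same addons-166000 context as above):
	$ kubectl --context addons-166000 -n kube-system get svc,endpoints registry
	$ kubectl --context addons-166000 run registry-test --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"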
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-166000 -n addons-166000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-007000 | jenkins | v1.34.0 | 13 Sep 24 11:19 PDT |                     |
	|         | -p download-only-007000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| delete  | -p download-only-007000                                                                     | download-only-007000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| start   | -o=json --download-only                                                                     | download-only-606000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT |                     |
	|         | -p download-only-606000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| delete  | -p download-only-606000                                                                     | download-only-606000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| delete  | -p download-only-007000                                                                     | download-only-007000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| delete  | -p download-only-606000                                                                     | download-only-606000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-525000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT |                     |
	|         | binary-mirror-525000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49313                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-525000                                                                     | binary-mirror-525000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| addons  | enable dashboard -p                                                                         | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT |                     |
	|         | addons-166000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT |                     |
	|         | addons-166000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-166000 --wait=true                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:23 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-166000 addons disable                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:24 PDT | 13 Sep 24 11:24 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:32 PDT | 13 Sep 24 11:32 PDT |
	|         | -p addons-166000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-166000 addons disable                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:32 PDT | 13 Sep 24 11:32 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-166000 addons disable                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:32 PDT | 13 Sep 24 11:33 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:33 PDT | 13 Sep 24 11:33 PDT |
	|         | -p addons-166000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-166000 ssh cat                                                                       | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:33 PDT | 13 Sep 24 11:33 PDT |
	|         | /opt/local-path-provisioner/pvc-71858af5-0dcd-4beb-8a2f-1b15243c2fcb_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-166000 addons disable                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:33 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-166000 ip                                                                            | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:33 PDT | 13 Sep 24 11:33 PDT |
	| addons  | addons-166000 addons disable                                                                | addons-166000        | jenkins | v1.34.0 | 13 Sep 24 11:33 PDT | 13 Sep 24 11:33 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 11:20:37
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 11:20:37.059062    1774 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:20:37.059187    1774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:20:37.059190    1774 out.go:358] Setting ErrFile to fd 2...
	I0913 11:20:37.059192    1774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:20:37.059307    1774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:20:37.060342    1774 out.go:352] Setting JSON to false
	I0913 11:20:37.076269    1774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1200,"bootTime":1726250437,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:20:37.076326    1774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:20:37.080674    1774 out.go:177] * [addons-166000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:20:37.087721    1774 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 11:20:37.087750    1774 notify.go:220] Checking for updates...
	I0913 11:20:37.094633    1774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:20:37.097662    1774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:20:37.100687    1774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:20:37.103656    1774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 11:20:37.106612    1774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 11:20:37.109868    1774 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:20:37.113655    1774 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 11:20:37.120657    1774 start.go:297] selected driver: qemu2
	I0913 11:20:37.120665    1774 start.go:901] validating driver "qemu2" against <nil>
	I0913 11:20:37.120673    1774 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 11:20:37.123001    1774 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 11:20:37.125639    1774 out.go:177] * Automatically selected the socket_vmnet network
	I0913 11:20:37.127020    1774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 11:20:37.127035    1774 cni.go:84] Creating CNI manager for ""
	I0913 11:20:37.127056    1774 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 11:20:37.127060    1774 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 11:20:37.127089    1774 start.go:340] cluster config:
	{Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:20:37.130416    1774 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 11:20:37.138675    1774 out.go:177] * Starting "addons-166000" primary control-plane node in "addons-166000" cluster
	I0913 11:20:37.142587    1774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 11:20:37.142605    1774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 11:20:37.142614    1774 cache.go:56] Caching tarball of preloaded images
	I0913 11:20:37.142683    1774 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 11:20:37.142689    1774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 11:20:37.142880    1774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/config.json ...
	I0913 11:20:37.142892    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/config.json: {Name:mk84b234fc8da8e31a2d64f025023d1db8ac3732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:20:37.143295    1774 start.go:360] acquireMachinesLock for addons-166000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 11:20:37.143355    1774 start.go:364] duration metric: took 54.042µs to acquireMachinesLock for "addons-166000"
	I0913 11:20:37.143366    1774 start.go:93] Provisioning new machine with config: &{Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 11:20:37.143391    1774 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 11:20:37.151579    1774 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 11:20:37.477232    1774 start.go:159] libmachine.API.Create for "addons-166000" (driver="qemu2")
	I0913 11:20:37.477270    1774 client.go:168] LocalClient.Create starting
	I0913 11:20:37.477418    1774 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 11:20:37.568743    1774 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 11:20:37.618076    1774 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 11:20:37.870786    1774 main.go:141] libmachine: Creating SSH key...
	I0913 11:20:37.990970    1774 main.go:141] libmachine: Creating Disk image...
	I0913 11:20:37.990980    1774 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 11:20:37.991270    1774 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2
	I0913 11:20:38.010236    1774 main.go:141] libmachine: STDOUT: 
	I0913 11:20:38.010259    1774 main.go:141] libmachine: STDERR: 
	I0913 11:20:38.010326    1774 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2 +20000M
	I0913 11:20:38.018369    1774 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 11:20:38.018390    1774 main.go:141] libmachine: STDERR: 
	I0913 11:20:38.018410    1774 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2
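	A sketch for debugging this step: the qcow2 image produced by the two qemu-img calls above can be inspected on the host (the path is the one reported in this log):
	
	    qemu-img info /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2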
	I0913 11:20:38.018415    1774 main.go:141] libmachine: Starting QEMU VM...
	I0913 11:20:38.018451    1774 qemu.go:418] Using hvf for hardware acceleration
	I0913 11:20:38.018477    1774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:2c:f3:d4:aa:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/disk.qcow2
	I0913 11:20:38.076968    1774 main.go:141] libmachine: STDOUT: 
	I0913 11:20:38.077005    1774 main.go:141] libmachine: STDERR: 
	I0913 11:20:38.077009    1774 main.go:141] libmachine: Attempt 0
	I0913 11:20:38.077021    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:38.077071    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:38.077090    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:40.079189    1774 main.go:141] libmachine: Attempt 1
	I0913 11:20:40.079271    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:40.079644    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:40.079692    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:42.080818    1774 main.go:141] libmachine: Attempt 2
	I0913 11:20:42.080923    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:42.081301    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:42.081363    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:44.081837    1774 main.go:141] libmachine: Attempt 3
	I0913 11:20:44.081868    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:44.081965    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:44.081994    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:46.083967    1774 main.go:141] libmachine: Attempt 4
	I0913 11:20:46.083974    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:46.084005    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:46.084011    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:48.086042    1774 main.go:141] libmachine: Attempt 5
	I0913 11:20:48.086062    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:48.086112    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:48.086124    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:50.088143    1774 main.go:141] libmachine: Attempt 6
	I0913 11:20:50.088163    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:50.088229    1774 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 11:20:50.088248    1774 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e5d3b9}
	I0913 11:20:52.090241    1774 main.go:141] libmachine: Attempt 7
	I0913 11:20:52.090261    1774 main.go:141] libmachine: Searching for d6:2c:f3:d4:aa:7e in /var/db/dhcpd_leases ...
	I0913 11:20:52.090392    1774 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0913 11:20:52.090405    1774 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:d6:2c:f3:d4:aa:7e ID:1,d6:2c:f3:d4:aa:7e Lease:0x66e5d402}
	I0913 11:20:52.090409    1774 main.go:141] libmachine: Found match: d6:2c:f3:d4:aa:7e
	I0913 11:20:52.090418    1774 main.go:141] libmachine: IP: 192.168.105.2
	I0913 11:20:52.090422    1774 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
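	The lease search above is a scan of the macOS vmnet DHCP lease file for the VM's MAC address; a rough manual equivalent, assuming the same MAC as in this run:
	
	    grep -B3 'd6:2c:f3:d4:aa:7e' /var/db/dhcpd_leases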
	I0913 11:20:54.108085    1774 machine.go:93] provisionDockerMachine start ...
	I0913 11:20:54.109627    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:54.110059    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:54.110075    1774 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 11:20:54.178090    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 11:20:54.178123    1774 buildroot.go:166] provisioning hostname "addons-166000"
	I0913 11:20:54.178261    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:54.178469    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:54.178480    1774 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-166000 && echo "addons-166000" | sudo tee /etc/hostname
	I0913 11:20:54.239960    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-166000
	
	I0913 11:20:54.240061    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:54.240217    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:54.240228    1774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-166000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-166000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-166000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 11:20:54.289347    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
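	A quick spot-check sketch for the hostname provisioning above, assuming the profile is still up:
	
	    out/minikube-darwin-arm64 -p addons-166000 ssh -- 'hostname && grep addons-166000 /etc/hosts'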
	I0913 11:20:54.289359    1774 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19636-1170/.minikube CaCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19636-1170/.minikube}
	I0913 11:20:54.289368    1774 buildroot.go:174] setting up certificates
	I0913 11:20:54.289374    1774 provision.go:84] configureAuth start
	I0913 11:20:54.289378    1774 provision.go:143] copyHostCerts
	I0913 11:20:54.289514    1774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem (1078 bytes)
	I0913 11:20:54.289747    1774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem (1123 bytes)
	I0913 11:20:54.289874    1774 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem (1679 bytes)
	I0913 11:20:54.289975    1774 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem org=jenkins.addons-166000 san=[127.0.0.1 192.168.105.2 addons-166000 localhost minikube]
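	A sketch for verifying the SAN list baked into the server certificate just generated, using openssl on the host against the path reported above:
	
	    openssl x509 -in /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'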
	I0913 11:20:54.364328    1774 provision.go:177] copyRemoteCerts
	I0913 11:20:54.364383    1774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 11:20:54.364390    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:20:54.387833    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 11:20:54.395845    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 11:20:54.403658    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 11:20:54.411655    1774 provision.go:87] duration metric: took 122.280917ms to configureAuth
	I0913 11:20:54.411666    1774 buildroot.go:189] setting minikube options for container-runtime
	I0913 11:20:54.411779    1774 config.go:182] Loaded profile config "addons-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:20:54.411825    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:54.411909    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:54.411914    1774 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 11:20:54.455744    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 11:20:54.455754    1774 buildroot.go:70] root file system type: tmpfs
	I0913 11:20:54.455806    1774 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 11:20:54.455861    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:54.455964    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:54.456000    1774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 11:20:54.504083    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 11:20:54.504138    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:54.504257    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:54.504265    1774 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 11:20:55.862157    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
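	A sketch for confirming that the empty ExecStart= reset in the drop-in above took effect; only the final ExecStart should be active in the rendered unit:
	
	    out/minikube-darwin-arm64 -p addons-166000 ssh -- systemctl cat docker | grep ExecStart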
	
	I0913 11:20:55.862171    1774 machine.go:96] duration metric: took 1.754123667s to provisionDockerMachine
	I0913 11:20:55.862177    1774 client.go:171] duration metric: took 18.385575583s to LocalClient.Create
	I0913 11:20:55.862192    1774 start.go:167] duration metric: took 18.385639375s to libmachine.API.Create "addons-166000"
	I0913 11:20:55.862198    1774 start.go:293] postStartSetup for "addons-166000" (driver="qemu2")
	I0913 11:20:55.862204    1774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 11:20:55.862283    1774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 11:20:55.862293    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:20:55.888909    1774 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 11:20:55.890604    1774 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 11:20:55.890618    1774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/addons for local assets ...
	I0913 11:20:55.890712    1774 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/files for local assets ...
	I0913 11:20:55.890744    1774 start.go:296] duration metric: took 28.543416ms for postStartSetup
	I0913 11:20:55.891150    1774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/config.json ...
	I0913 11:20:55.891333    1774 start.go:128] duration metric: took 18.748625708s to createHost
	I0913 11:20:55.891364    1774 main.go:141] libmachine: Using SSH client type: native
	I0913 11:20:55.891454    1774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048b1190] 0x1048b39d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 11:20:55.891459    1774 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 11:20:55.931738    1774 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726251655.937818378
	
	I0913 11:20:55.931747    1774 fix.go:216] guest clock: 1726251655.937818378
	I0913 11:20:55.931752    1774 fix.go:229] Guest: 2024-09-13 11:20:55.937818378 -0700 PDT Remote: 2024-09-13 11:20:55.891336 -0700 PDT m=+18.852022835 (delta=46.482378ms)
	I0913 11:20:55.931762    1774 fix.go:200] guest clock delta is within tolerance: 46.482378ms
	I0913 11:20:55.931765    1774 start.go:83] releasing machines lock for "addons-166000", held for 18.789094208s
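	A manual equivalent of the guest-clock delta check above; note that BSD date on macOS lacks %N, so python3 stands in host-side:
	
	    out/minikube-darwin-arm64 -p addons-166000 ssh -- date +%s.%N
	    python3 -c 'import time; print(time.time())'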
	I0913 11:20:55.932081    1774 ssh_runner.go:195] Run: cat /version.json
	I0913 11:20:55.932088    1774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 11:20:55.932091    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:20:55.932130    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:20:55.954848    1774 ssh_runner.go:195] Run: systemctl --version
	I0913 11:20:56.052169    1774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 11:20:56.054664    1774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 11:20:56.054711    1774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 11:20:56.062727    1774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 11:20:56.062736    1774 start.go:495] detecting cgroup driver to use...
	I0913 11:20:56.062878    1774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 11:20:56.070902    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 11:20:56.075228    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 11:20:56.079331    1774 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 11:20:56.079363    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 11:20:56.083267    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 11:20:56.086964    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 11:20:56.090665    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 11:20:56.094589    1774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 11:20:56.098457    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 11:20:56.102527    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 11:20:56.106313    1774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 11:20:56.110364    1774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 11:20:56.114176    1774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 11:20:56.117937    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:20:56.183551    1774 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 11:20:56.194884    1774 start.go:495] detecting cgroup driver to use...
	I0913 11:20:56.194952    1774 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 11:20:56.200441    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 11:20:56.205713    1774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 11:20:56.213830    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 11:20:56.219264    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 11:20:56.224770    1774 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 11:20:56.270155    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 11:20:56.276658    1774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 11:20:56.283291    1774 ssh_runner.go:195] Run: which cri-dockerd
	I0913 11:20:56.284746    1774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 11:20:56.288055    1774 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 11:20:56.293871    1774 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 11:20:56.358902    1774 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 11:20:56.422304    1774 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 11:20:56.422365    1774 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 11:20:56.428421    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:20:56.489612    1774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 11:20:58.670229    1774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.180680875s)
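	Once docker has restarted, the cgroup driver configured via /etc/docker/daemon.json can be confirmed the same way this log does further down; a sketch:
	
	    out/minikube-darwin-arm64 -p addons-166000 ssh -- docker info --format '{{.CgroupDriver}}'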
	I0913 11:20:58.670300    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 11:20:58.675727    1774 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 11:20:58.682419    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 11:20:58.687987    1774 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 11:20:58.749150    1774 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 11:20:58.810755    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:20:58.874468    1774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 11:20:58.881376    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 11:20:58.886326    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:20:58.950942    1774 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 11:20:58.976458    1774 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 11:20:58.976588    1774 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 11:20:58.979033    1774 start.go:563] Will wait 60s for crictl version
	I0913 11:20:58.979090    1774 ssh_runner.go:195] Run: which crictl
	I0913 11:20:58.980482    1774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 11:20:58.998796    1774 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 11:20:58.998877    1774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 11:20:59.010506    1774 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 11:20:59.024672    1774 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 11:20:59.024756    1774 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0913 11:20:59.026276    1774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 11:20:59.030733    1774 kubeadm.go:883] updating cluster {Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 11:20:59.030788    1774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 11:20:59.030841    1774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 11:20:59.036145    1774 docker.go:685] Got preloaded images: 
	I0913 11:20:59.036153    1774 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0913 11:20:59.036200    1774 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 11:20:59.039482    1774 ssh_runner.go:195] Run: which lz4
	I0913 11:20:59.040832    1774 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 11:20:59.042137    1774 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 11:20:59.042154    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0913 11:21:00.271564    1774 docker.go:649] duration metric: took 1.230813209s to copy over tarball
	I0913 11:21:00.271635    1774 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 11:21:01.211195    1774 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 11:21:01.225818    1774 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 11:21:01.229708    1774 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0913 11:21:01.235899    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:21:01.313826    1774 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 11:21:03.664543    1774 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.350787125s)
	I0913 11:21:03.664650    1774 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 11:21:03.670891    1774 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 11:21:03.670904    1774 cache_images.go:84] Images are preloaded, skipping loading
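	The same preload check can be repeated by hand; a sketch (an empty or partial list is what triggers the tarball copy above):
	
	    out/minikube-darwin-arm64 -p addons-166000 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'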
	I0913 11:21:03.670909    1774 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0913 11:21:03.670989    1774 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-166000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 11:21:03.671057    1774 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 11:21:03.691949    1774 cni.go:84] Creating CNI manager for ""
	I0913 11:21:03.691961    1774 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 11:21:03.691965    1774 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 11:21:03.691981    1774 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-166000 NodeName:addons-166000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 11:21:03.692037    1774 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-166000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 11:21:03.692120    1774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 11:21:03.695611    1774 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 11:21:03.695647    1774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 11:21:03.698957    1774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 11:21:03.704859    1774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 11:21:03.710412    1774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
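
The kubeadm.yaml.new just copied over is the four-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---), and the 313-byte 10-kubeadm.conf drop-in carries the kubelet flags from the [Service] section earlier. Minikube renders these files from Go templates before transferring them; a deliberately cut-down sketch of that rendering step (this is not minikube's real template):

	package main

	import (
		"os"
		"text/template"
	)

	// An abbreviated template in the style minikube uses to render
	// kubeadm.yaml; the real file carries all four documents.
	const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.APIServerPort}}\n" +
		"nodeRegistration:\n" +
		"  criSocket: {{.CRISocket}}\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		_ = t.Execute(os.Stdout, map[string]interface{}{
			"AdvertiseAddress": "192.168.105.2",
			"APIServerPort":    8443,
			"CRISocket":        "unix:///var/run/cri-dockerd.sock",
			"NodeName":         "addons-166000",
		})
	}
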
	I0913 11:21:03.716235    1774 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0913 11:21:03.717621    1774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
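
The one-liner above makes the /etc/hosts edit idempotent: grep -v strips any previous control-plane.minikube.internal line, the fresh mapping is appended, and the result is copied back over /etc/hosts. The same read-filter-append pattern in Go (a sketch; minikube runs the shell version over SSH inside the guest):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites a hosts file so exactly one line maps
	// the given name, mirroring the grep -v / append / cp one-liner
	// in the log. Illustrative only.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop blanks and any stale mapping for this name.
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts", "192.168.105.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
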
	I0913 11:21:03.721873    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:21:03.786882    1774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 11:21:03.796963    1774 certs.go:68] Setting up /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000 for IP: 192.168.105.2
	I0913 11:21:03.796972    1774 certs.go:194] generating shared ca certs ...
	I0913 11:21:03.796980    1774 certs.go:226] acquiring lock for ca certs: {Name:mka395184640c64d3892ae138bcca34b27eb400d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:03.797166    1774 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key
	I0913 11:21:03.987929    1774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt ...
	I0913 11:21:03.987941    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt: {Name:mke81fb861e127e9bf01d593ead510e2830141a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:03.988265    1774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key ...
	I0913 11:21:03.988269    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key: {Name:mk9df5649822e10bf84ca8c364c7411047256ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:03.988418    1774 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key
	I0913 11:21:04.280908    1774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.crt ...
	I0913 11:21:04.280931    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.crt: {Name:mkf5364cff06d4394739f0c4e384d4fbd28b6d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.281332    1774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key ...
	I0913 11:21:04.281336    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key: {Name:mk9c1ee3c48424e554a8d0bfb6210cb50272396c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.281468    1774 certs.go:256] generating profile certs ...
	I0913 11:21:04.281509    1774 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.key
	I0913 11:21:04.281517    1774 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt with IP's: []
	I0913 11:21:04.460359    1774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt ...
	I0913 11:21:04.460365    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: {Name:mk9f0664f179a0319d01b7a95df22975f7af2984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.460545    1774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.key ...
	I0913 11:21:04.460548    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.key: {Name:mk4f2970b1381896d92d848f7b9d8415020c6abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.460670    1774 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.key.b47e9557
	I0913 11:21:04.460681    1774 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.crt.b47e9557 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0913 11:21:04.564955    1774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.crt.b47e9557 ...
	I0913 11:21:04.564966    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.crt.b47e9557: {Name:mkfb8b8929cde541f337d9182f95fa5fe4948c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.565157    1774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.key.b47e9557 ...
	I0913 11:21:04.565161    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.key.b47e9557: {Name:mkba64bab61a4d8b09435bf930fe7b429ff42fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.565269    1774 certs.go:381] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.crt.b47e9557 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.crt
	I0913 11:21:04.565500    1774 certs.go:385] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.key.b47e9557 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.key
	I0913 11:21:04.565601    1774 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.key
	I0913 11:21:04.565620    1774 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.crt with IP's: []
	I0913 11:21:04.631569    1774 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.crt ...
	I0913 11:21:04.631573    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.crt: {Name:mk11b623342d1b13149d9fca9d760bcc292f56c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.631713    1774 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.key ...
	I0913 11:21:04.631716    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.key: {Name:mk7f7e2a259a8c7d9e8b8bacff76ee5541070854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:04.631958    1774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 11:21:04.631983    1774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem (1078 bytes)
	I0913 11:21:04.632001    1774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem (1123 bytes)
	I0913 11:21:04.632019    1774 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem (1679 bytes)
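
The certs phase above follows one pattern throughout: generate a CA once (minikubeCA, proxyClientCA), then sign per-profile leaf certificates whose SANs carry the IPs a client might dial. Note that the apiserver cert's IP list includes 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR and therefore the ClusterIP of the kubernetes service. A self-contained Go sketch of the CA-plus-leaf flow using crypto/x509 (error handling elided for brevity; this is not minikube's actual crypto.go):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// Generate a CA, then sign an apiserver-style leaf cert whose
	// SANs carry the IPs from the log above.
	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), // ClusterIP of the kubernetes service
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.105.2"), // node IP
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, leafKey)
		fmt.Printf("signed apiserver cert: %d DER bytes\n", len(leafDER))
	}
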
	I0913 11:21:04.632423    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 11:21:04.642751    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 11:21:04.652709    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 11:21:04.661173    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 11:21:04.671256    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 11:21:04.679122    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 11:21:04.687268    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 11:21:04.695357    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 11:21:04.703411    1774 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 11:21:04.711395    1774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 11:21:04.718094    1774 ssh_runner.go:195] Run: openssl version
	I0913 11:21:04.720617    1774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 11:21:04.724257    1774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 11:21:04.725801    1774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0913 11:21:04.725825    1774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 11:21:04.727926    1774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
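
The b5213941.0 symlink name is OpenSSL's subject-hash convention: "openssl x509 -hash -noout" prints the hash of the certificate's subject, and OpenSSL-based clients look up trust anchors in /etc/ssl/certs by <hash>.0. A Go sketch of the same two steps (simplified; in the log they run as shell commands over SSH, as root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// Compute the OpenSSL subject hash of the CA and link it into
	// /etc/ssl/certs/<hash>.0, the lookup name OpenSSL-based clients
	// use to find trust anchors.
	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		hash := strings.TrimSpace(string(out)) // "b5213941" for this CA
		link := "/etc/ssl/certs/" + hash + ".0"
		if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
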
	I0913 11:21:04.731383    1774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 11:21:04.732801    1774 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 11:21:04.732840    1774 kubeadm.go:392] StartCluster: {Name:addons-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:21:04.732920    1774 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 11:21:04.738094    1774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 11:21:04.742130    1774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 11:21:04.745955    1774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 11:21:04.749711    1774 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 11:21:04.749716    1774 kubeadm.go:157] found existing configuration files:
	
	I0913 11:21:04.749739    1774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 11:21:04.753227    1774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 11:21:04.753251    1774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 11:21:04.756763    1774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 11:21:04.759804    1774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 11:21:04.759830    1774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 11:21:04.762977    1774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 11:21:04.766250    1774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 11:21:04.766273    1774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 11:21:04.769821    1774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 11:21:04.773467    1774 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 11:21:04.773501    1774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
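
Each grep/rm pair above implements the stale-config cleanup: a kubeconfig that does not mention https://control-plane.minikube.internal:8443 is removed so kubeadm will regenerate it. Here every grep exits with status 2 because this is a first start and none of the files exist yet, so the rm calls are no-ops. The loop, sketched in Go (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// Drop any kubeconfig that doesn't point at the expected
	// control-plane endpoint, so kubeadm regenerates it.
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(conf) // a no-op when the file is already absent
				fmt.Println("stale or missing, removed:", conf)
			}
		}
	}
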
	I0913 11:21:04.777073    1774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 11:21:04.798067    1774 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 11:21:04.798096    1774 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 11:21:04.835400    1774 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 11:21:04.835452    1774 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 11:21:04.835498    1774 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 11:21:04.839767    1774 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 11:21:04.851005    1774 out.go:235]   - Generating certificates and keys ...
	I0913 11:21:04.851039    1774 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 11:21:04.851099    1774 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 11:21:04.892971    1774 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 11:21:05.075770    1774 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 11:21:05.229284    1774 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 11:21:05.381143    1774 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 11:21:05.511084    1774 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 11:21:05.511140    1774 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-166000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0913 11:21:05.585823    1774 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 11:21:05.585899    1774 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-166000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0913 11:21:05.640256    1774 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 11:21:05.855358    1774 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 11:21:05.915230    1774 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 11:21:05.915267    1774 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 11:21:06.086363    1774 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 11:21:06.121094    1774 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 11:21:06.230729    1774 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 11:21:06.296359    1774 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 11:21:06.390362    1774 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 11:21:06.390418    1774 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 11:21:06.390456    1774 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 11:21:06.393739    1774 out.go:235]   - Booting up control plane ...
	I0913 11:21:06.393791    1774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 11:21:06.393827    1774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 11:21:06.393864    1774 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 11:21:06.398638    1774 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 11:21:06.400871    1774 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 11:21:06.400895    1774 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 11:21:06.469312    1774 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 11:21:06.469379    1774 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 11:21:06.979778    1774 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.376834ms
	I0913 11:21:06.979937    1774 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 11:21:10.481049    1774 kubeadm.go:310] [api-check] The API server is healthy after 3.501088293s
	I0913 11:21:10.493395    1774 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 11:21:10.502225    1774 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 11:21:10.512852    1774 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 11:21:10.513004    1774 kubeadm.go:310] [mark-control-plane] Marking the node addons-166000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 11:21:10.517008    1774 kubeadm.go:310] [bootstrap-token] Using token: bop6fb.3pz03d6kaqe1ive9
	I0913 11:21:10.518598    1774 out.go:235]   - Configuring RBAC rules ...
	I0913 11:21:10.518664    1774 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 11:21:10.521668    1774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 11:21:10.525796    1774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 11:21:10.526854    1774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 11:21:10.527892    1774 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 11:21:10.529002    1774 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 11:21:10.888440    1774 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 11:21:11.292831    1774 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 11:21:11.886294    1774 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 11:21:11.887554    1774 kubeadm.go:310] 
	I0913 11:21:11.887620    1774 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 11:21:11.887635    1774 kubeadm.go:310] 
	I0913 11:21:11.887740    1774 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 11:21:11.887754    1774 kubeadm.go:310] 
	I0913 11:21:11.887788    1774 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 11:21:11.887847    1774 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 11:21:11.887909    1774 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 11:21:11.887916    1774 kubeadm.go:310] 
	I0913 11:21:11.887975    1774 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 11:21:11.887983    1774 kubeadm.go:310] 
	I0913 11:21:11.888029    1774 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 11:21:11.888037    1774 kubeadm.go:310] 
	I0913 11:21:11.888126    1774 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 11:21:11.888206    1774 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 11:21:11.888321    1774 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 11:21:11.888330    1774 kubeadm.go:310] 
	I0913 11:21:11.888444    1774 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 11:21:11.888535    1774 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 11:21:11.888548    1774 kubeadm.go:310] 
	I0913 11:21:11.888669    1774 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bop6fb.3pz03d6kaqe1ive9 \
	I0913 11:21:11.888778    1774 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 \
	I0913 11:21:11.888803    1774 kubeadm.go:310] 	--control-plane 
	I0913 11:21:11.888808    1774 kubeadm.go:310] 
	I0913 11:21:11.888882    1774 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 11:21:11.888888    1774 kubeadm.go:310] 
	I0913 11:21:11.888967    1774 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bop6fb.3pz03d6kaqe1ive9 \
	I0913 11:21:11.889087    1774 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 
	I0913 11:21:11.889443    1774 kubeadm.go:310] W0913 18:21:04.803326    1594 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 11:21:11.889774    1774 kubeadm.go:310] W0913 18:21:04.803680    1594 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 11:21:11.889915    1774 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 11:21:11.889943    1774 cni.go:84] Creating CNI manager for ""
	I0913 11:21:11.889994    1774 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 11:21:11.893869    1774 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 11:21:11.896742    1774 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 11:21:11.904852    1774 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 11:21:11.916237    1774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 11:21:11.916319    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:11.916333    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-166000 minikube.k8s.io/updated_at=2024_09_13T11_21_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=addons-166000 minikube.k8s.io/primary=true
	I0913 11:21:11.929413    1774 ops.go:34] apiserver oom_adj: -16
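
An oom_adj of -16 for kube-apiserver means the kernel's OOM killer will strongly prefer to kill other processes first; the "cat /proc/$(pgrep kube-apiserver)/oom_adj" command above is how that value is read. A Go sketch of the same check (assumes a single apiserver process on the node):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// Read the OOM-killer adjustment of the running kube-apiserver,
	// mirroring the /proc read above.
	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj) // -16 here: avoid OOM-killing it
	}
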
	I0913 11:21:11.987897    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:12.490011    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:12.990197    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:13.489957    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:13.989126    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:14.489248    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:14.990141    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:15.489900    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:15.989981    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:16.489931    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:16.989752    1774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 11:21:17.054753    1774 kubeadm.go:1113] duration metric: took 5.138683166s to wait for elevateKubeSystemPrivileges
	I0913 11:21:17.054771    1774 kubeadm.go:394] duration metric: took 12.32238375s to StartCluster
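
The burst of "kubectl get sa default" runs above is a poll, roughly twice a second, for the default service account, which kube-controller-manager's service-account controller creates shortly after init; workloads in a namespace cannot start until it exists, hence the ~5.1s elevateKubeSystemPrivileges wait. A Go sketch of the loop (paths taken from the log; not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Poll for the default service account about twice a second, the
	// pattern behind the repeated "kubectl get sa default" runs above.
	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default service account exists; RBAC setup can proceed")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
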
	I0913 11:21:17.054783    1774 settings.go:142] acquiring lock: {Name:mk30414fb8bdc9357b580933d1c04157a3bd6358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:17.054951    1774 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:21:17.055135    1774 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:21:17.055381    1774 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 11:21:17.055389    1774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 11:21:17.055440    1774 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 11:21:17.055486    1774 addons.go:69] Setting yakd=true in profile "addons-166000"
	I0913 11:21:17.055495    1774 addons.go:234] Setting addon yakd=true in "addons-166000"
	I0913 11:21:17.055505    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.055510    1774 addons.go:69] Setting inspektor-gadget=true in profile "addons-166000"
	I0913 11:21:17.055517    1774 addons.go:234] Setting addon inspektor-gadget=true in "addons-166000"
	I0913 11:21:17.055531    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.055589    1774 config.go:182] Loaded profile config "addons-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:21:17.055589    1774 addons.go:69] Setting storage-provisioner=true in profile "addons-166000"
	I0913 11:21:17.055599    1774 addons.go:234] Setting addon storage-provisioner=true in "addons-166000"
	I0913 11:21:17.055598    1774 addons.go:69] Setting ingress=true in profile "addons-166000"
	I0913 11:21:17.055617    1774 addons.go:69] Setting metrics-server=true in profile "addons-166000"
	I0913 11:21:17.055623    1774 addons.go:234] Setting addon metrics-server=true in "addons-166000"
	I0913 11:21:17.055622    1774 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-166000"
	I0913 11:21:17.055630    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.055630    1774 addons.go:234] Setting addon ingress=true in "addons-166000"
	I0913 11:21:17.055638    1774 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-166000"
	I0913 11:21:17.055644    1774 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-166000"
	I0913 11:21:17.055667    1774 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-166000"
	I0913 11:21:17.055630    1774 addons.go:69] Setting registry=true in profile "addons-166000"
	I0913 11:21:17.055683    1774 addons.go:234] Setting addon registry=true in "addons-166000"
	I0913 11:21:17.055697    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.055703    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.055826    1774 addons.go:69] Setting ingress-dns=true in profile "addons-166000"
	I0913 11:21:17.055617    1774 addons.go:69] Setting cloud-spanner=true in profile "addons-166000"
	I0913 11:21:17.055832    1774 addons.go:234] Setting addon ingress-dns=true in "addons-166000"
	I0913 11:21:17.055601    1774 addons.go:69] Setting default-storageclass=true in profile "addons-166000"
	I0913 11:21:17.055836    1774 addons.go:69] Setting gcp-auth=true in profile "addons-166000"
	I0913 11:21:17.055875    1774 mustload.go:65] Loading cluster: addons-166000
	I0913 11:21:17.055921    1774 retry.go:31] will retry after 1.299815887s: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055626    1774 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-166000"
	I0913 11:21:17.055949    1774 retry.go:31] will retry after 1.307178764s: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055985    1774 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-166000"
	I0913 11:21:17.055991    1774 config.go:182] Loaded profile config "addons-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:21:17.056018    1774 retry.go:31] will retry after 951.716263ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055670    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.056029    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.055634    1774 addons.go:69] Setting volcano=true in profile "addons-166000"
	I0913 11:21:17.056063    1774 addons.go:234] Setting addon volcano=true in "addons-166000"
	I0913 11:21:17.056071    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.056113    1774 retry.go:31] will retry after 972.540382ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055839    1774 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-166000"
	I0913 11:21:17.055827    1774 addons.go:69] Setting volumesnapshots=true in profile "addons-166000"
	I0913 11:21:17.056183    1774 addons.go:234] Setting addon volumesnapshots=true in "addons-166000"
	I0913 11:21:17.056190    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.056213    1774 retry.go:31] will retry after 1.097786277s: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.056376    1774 retry.go:31] will retry after 823.6136ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055833    1774 addons.go:234] Setting addon cloud-spanner=true in "addons-166000"
	I0913 11:21:17.056418    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.056420    1774 retry.go:31] will retry after 1.23684159s: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055613    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.056552    1774 retry.go:31] will retry after 807.589118ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.056561    1774 retry.go:31] will retry after 724.678931ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.055843    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.056605    1774 retry.go:31] will retry after 959.968671ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.056687    1774 retry.go:31] will retry after 952.85093ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.056786    1774 retry.go:31] will retry after 573.827211ms: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
	I0913 11:21:17.056788    1774 retry.go:31] will retry after 1.011870298s: connect: dial unix /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/monitor: connect: connection refused
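
The cluster of retry.go lines above comes from the addon goroutines all racing to query machine state through the QEMU driver's monitor socket before it is accepting connections; each refused connection schedules another attempt after a randomized sub-second-to-1.3s delay. The backoff pattern, sketched in Go (jitter bounds chosen to resemble the delays above, not copied from minikube):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// Retry an operation with jittered backoff, the pattern behind
	// the retry.go lines above.
	func withRetry(attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := 500*time.Millisecond + time.Duration(rand.Int63n(int64(800*time.Millisecond)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		_ = withRetry(5, func() error {
			calls++
			if calls < 3 {
				return errors.New("connect: connection refused")
			}
			return nil
		})
		fmt.Println("connected after", calls, "attempts")
	}
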
	I0913 11:21:17.059728    1774 out.go:177] * Verifying Kubernetes components...
	I0913 11:21:17.067679    1774 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 11:21:17.067682    1774 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 11:21:17.071743    1774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 11:21:17.075597    1774 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 11:21:17.075605    1774 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 11:21:17.075615    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:17.079673    1774 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 11:21:17.079681    1774 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 11:21:17.079688    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:17.116348    1774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
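
The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts plugin block ahead of the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the host's gateway address, and adds the log plugin before the errors line. The injected fragment, exactly as embedded in the command, ends up in the Corefile as:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}
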
	I0913 11:21:17.183861    1774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 11:21:17.210960    1774 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 11:21:17.210971    1774 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 11:21:17.232112    1774 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 11:21:17.232124    1774 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 11:21:17.240603    1774 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 11:21:17.240614    1774 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 11:21:17.256195    1774 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 11:21:17.256210    1774 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 11:21:17.278517    1774 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 11:21:17.278534    1774 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 11:21:17.291874    1774 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 11:21:17.291887    1774 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 11:21:17.337573    1774 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 11:21:17.337587    1774 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 11:21:17.353643    1774 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 11:21:17.353653    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 11:21:17.366990    1774 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 11:21:17.367004    1774 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 11:21:17.405414    1774 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 11:21:17.405428    1774 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 11:21:17.416189    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 11:21:17.421999    1774 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 11:21:17.422007    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 11:21:17.445841    1774 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0913 11:21:17.447364    1774 node_ready.go:35] waiting up to 6m0s for node "addons-166000" to be "Ready" ...
	I0913 11:21:17.454192    1774 node_ready.go:49] node "addons-166000" has status "Ready":"True"
	I0913 11:21:17.454210    1774 node_ready.go:38] duration metric: took 6.824167ms for node "addons-166000" to be "Ready" ...
	I0913 11:21:17.454215    1774 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 11:21:17.462307    1774 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4nzpx" in "kube-system" namespace to be "Ready" ...
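
The pod_ready wait polls each system-critical pod until its Ready condition reports True (or the 6m0s budget runs out). An equivalent standalone check via kubectl's jsonpath output, sketched in Go (pod name and binary path taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Poll a pod's Ready condition through kubectl's jsonpath output
	// until it reports "True", mirroring the pod_ready wait above.
	func main() {
		args := []string{
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-n", "kube-system", "get", "pod", "coredns-7c65d6cfc9-4nzpx",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		}
		for start := time.Now(); time.Since(start) < 6*time.Minute; {
			out, err := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod readiness")
	}
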
	I0913 11:21:17.482256    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 11:21:17.638210    1774 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 11:21:17.642269    1774 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 11:21:17.642279    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 11:21:17.642288    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:17.694819    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 11:21:17.735937    1774 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-166000 service yakd-dashboard -n yakd-dashboard
	
	I0913 11:21:17.787923    1774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 11:21:17.791891    1774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 11:21:17.795836    1774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 11:21:17.799944    1774 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 11:21:17.799951    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 11:21:17.799960    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:17.865378    1774 addons.go:234] Setting addon default-storageclass=true in "addons-166000"
	I0913 11:21:17.865398    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:17.866014    1774 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 11:21:17.866020    1774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 11:21:17.866026    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:17.884889    1774 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 11:21:17.888891    1774 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 11:21:17.888899    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 11:21:17.888910    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:17.891496    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 11:21:17.943279    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 11:21:17.949637    1774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-166000" context rescaled to 1 replicas
	I0913 11:21:17.968473    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 11:21:18.013752    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 11:21:18.017765    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 11:21:18.021646    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 11:21:18.025785    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 11:21:18.029740    1774 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 11:21:18.033701    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 11:21:18.033732    1774 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 11:21:18.033739    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 11:21:18.033748    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.033709    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 11:21:18.043748    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 11:21:18.043753    1774 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 11:21:18.043755    1774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 11:21:18.043787    1774 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 11:21:18.043892    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.053574    1774 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 11:21:18.057678    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 11:21:18.057718    1774 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 11:21:18.057778    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 11:21:18.057790    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.063736    1774 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 11:21:18.066687    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 11:21:18.066700    1774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 11:21:18.066713    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.072784    1774 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 11:21:18.075824    1774 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 11:21:18.075833    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 11:21:18.075842    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.124270    1774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 11:21:18.124283    1774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 11:21:18.154511    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:18.187816    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 11:21:18.216492    1774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 11:21:18.216506    1774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 11:21:18.216714    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 11:21:18.216727    1774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 11:21:18.245031    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 11:21:18.266282    1774 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 11:21:18.266293    1774 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 11:21:18.275347    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 11:21:18.275361    1774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 11:21:18.280172    1774 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 11:21:18.280184    1774 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 11:21:18.286555    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 11:21:18.286569    1774 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 11:21:18.297760    1774 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 11:21:18.301707    1774 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 11:21:18.304726    1774 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 11:21:18.308093    1774 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 11:21:18.308103    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 11:21:18.308115    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.308383    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 11:21:18.308388    1774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 11:21:18.327090    1774 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 11:21:18.327100    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 11:21:18.341694    1774 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 11:21:18.341704    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 11:21:18.359745    1774 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 11:21:18.363668    1774 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 11:21:18.363681    1774 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 11:21:18.363693    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.363971    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 11:21:18.363977    1774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 11:21:18.364745    1774 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-166000"
	I0913 11:21:18.364761    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:18.368578    1774 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 11:21:18.377679    1774 out.go:177]   - Using image docker.io/busybox:stable
	I0913 11:21:18.383730    1774 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 11:21:18.383738    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 11:21:18.383748    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:18.384041    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 11:21:18.384110    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 11:21:18.465591    1774 pod_ready.go:93] pod "coredns-7c65d6cfc9-4nzpx" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:18.465606    1774 pod_ready.go:82] duration metric: took 1.003321541s for pod "coredns-7c65d6cfc9-4nzpx" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:18.465614    1774 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-68gld" in "kube-system" namespace to be "Ready" ...
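The pod_ready.go lines above follow a simple shape: get the named pod, check its Ready condition, and poll again until the 6m0s budget is spent. A hedged approximation using client-go and apimachinery's wait helpers (PollUntilContextTimeout needs a reasonably recent apimachinery; waitPodReady is an illustrative name, not minikube's function):

    package kubewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady blocks until the named pod reports Ready=True or the
    // timeout elapses, polling every 500ms (the log ticks at roughly
    // half-second intervals).
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    // Usage: waitPodReady(ctx, cs, "kube-system", "coredns-7c65d6cfc9-68gld", 6*time.Minute)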
	I0913 11:21:18.478594    1774 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 11:21:18.478607    1774 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 11:21:18.511477    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 11:21:18.533369    1774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 11:21:18.533381    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 11:21:18.590631    1774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 11:21:18.590643    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 11:21:18.594688    1774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 11:21:18.594697    1774 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 11:21:18.648168    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 11:21:18.669318    1774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 11:21:18.669334    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 11:21:18.678954    1774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 11:21:18.678964    1774 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 11:21:18.713695    1774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 11:21:18.713706    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 11:21:18.736109    1774 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 11:21:18.736121    1774 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 11:21:18.780807    1774 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 11:21:18.780824    1774 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 11:21:18.819442    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 11:21:18.846197    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 11:21:19.588687    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.697237167s)
	I0913 11:21:19.588727    1774 addons.go:475] Verifying addon ingress=true in "addons-166000"
	I0913 11:21:19.588774    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.6455405s)
	I0913 11:21:19.588786    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.620364625s)
	I0913 11:21:19.588810    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.401035375s)
	I0913 11:21:19.588828    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.343839417s)
	I0913 11:21:19.593454    1774 out.go:177] * Verifying ingress addon...
	I0913 11:21:19.601862    1774 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 11:21:19.605333    1774 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 11:21:19.605340    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
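The kapi.go wait is the same idea keyed on a label selector instead of a pod name: list whatever matches in the namespace and report progress until nothing is left Pending. Roughly, with client-go (allRunning is an illustrative name):

    package kubewait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allRunning reports whether every pod matching selector in ns has
    // left Pending and reached phase Running.
    func allRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, fmt.Errorf("no pods yet for %q", selector)
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }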
	I0913 11:21:20.076569    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.692573417s)
	I0913 11:21:20.076592    1774 addons.go:475] Verifying addon registry=true in "addons-166000"
	I0913 11:21:20.076616    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.692557125s)
	W0913 11:21:20.076638    1774 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 11:21:20.076740    1774 retry.go:31] will retry after 228.493652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
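The failure above is the usual CRD-ordering race: a single kubectl apply submits both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the class is rejected because the just-created CRDs are not yet registered in API discovery, hence "ensure CRDs are installed first". minikube's response, per the retry.go line, is to wait briefly and re-apply (the second attempt at 11:21:20.307 below adds --force). A stripped-down sketch of that retry shape in plain Go; the names are illustrative, and a real implementation would typically add jitter:

    package kubewait

    import "time"

    // retryApply re-runs apply with a growing delay until it succeeds or
    // the attempt budget is exhausted, echoing the "will retry after
    // 228.493652ms" line above.
    func retryApply(apply func() error, attempts int, initial time.Duration) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2 // simple doubling; a production retry would add jitter
        }
        return err
    }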
	I0913 11:21:20.079942    1774 out.go:177] * Verifying registry addon...
	I0913 11:21:20.087352    1774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 11:21:20.096383    1774 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 11:21:20.096395    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:20.117540    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:20.307379    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 11:21:20.477394    1774 pod_ready.go:103] pod "coredns-7c65d6cfc9-68gld" in "kube-system" namespace has status "Ready":"False"
	I0913 11:21:20.591538    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:20.606214    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:20.970359    1774 pod_ready.go:93] pod "coredns-7c65d6cfc9-68gld" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:20.970368    1774 pod_ready.go:82] duration metric: took 2.504842416s for pod "coredns-7c65d6cfc9-68gld" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:20.970374    1774 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:21.090335    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:21.106070    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:21.493885    1774 pod_ready.go:93] pod "etcd-addons-166000" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:21.493895    1774 pod_ready.go:82] duration metric: took 523.536875ms for pod "etcd-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:21.493902    1774 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:21.611553    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:21.639383    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:21.986190    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.474819125s)
	I0913 11:21:21.986219    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.338161375s)
	I0913 11:21:21.986313    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.1669755s)
	I0913 11:21:21.986324    1774 addons.go:475] Verifying addon metrics-server=true in "addons-166000"
	I0913 11:21:21.986496    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.140392167s)
	I0913 11:21:21.986507    1774 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-166000"
	I0913 11:21:21.986602    1774 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.679266667s)
	I0913 11:21:21.991617    1774 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 11:21:22.004197    1774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 11:21:22.056755    1774 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 11:21:22.056764    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:22.144775    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:22.144892    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:22.507695    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:22.607836    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:22.607970    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:23.007792    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:23.091049    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:23.105496    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:23.498667    1774 pod_ready.go:103] pod "kube-apiserver-addons-166000" in "kube-system" namespace has status "Ready":"False"
	I0913 11:21:23.507393    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:23.607793    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:23.607930    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:24.009449    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:24.091273    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:24.105902    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:24.508306    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:24.608576    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:24.608794    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:25.007732    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:25.090792    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:25.105388    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:25.499512    1774 pod_ready.go:93] pod "kube-apiserver-addons-166000" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:25.499522    1774 pod_ready.go:82] duration metric: took 4.005763083s for pod "kube-apiserver-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.499535    1774 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.501843    1774 pod_ready.go:93] pod "kube-controller-manager-addons-166000" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:25.501850    1774 pod_ready.go:82] duration metric: took 2.311125ms for pod "kube-controller-manager-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.501854    1774 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lrwqv" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.503739    1774 pod_ready.go:93] pod "kube-proxy-lrwqv" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:25.503744    1774 pod_ready.go:82] duration metric: took 1.887541ms for pod "kube-proxy-lrwqv" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.503748    1774 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.506737    1774 pod_ready.go:93] pod "kube-scheduler-addons-166000" in "kube-system" namespace has status "Ready":"True"
	I0913 11:21:25.506744    1774 pod_ready.go:82] duration metric: took 2.992584ms for pod "kube-scheduler-addons-166000" in "kube-system" namespace to be "Ready" ...
	I0913 11:21:25.506747    1774 pod_ready.go:39] duration metric: took 8.05282175s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 11:21:25.506757    1774 api_server.go:52] waiting for apiserver process to appear ...
	I0913 11:21:25.506829    1774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 11:21:25.507005    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:25.513377    1774 api_server.go:72] duration metric: took 8.458294125s to wait for apiserver process to appear ...
	I0913 11:21:25.513387    1774 api_server.go:88] waiting for apiserver healthz status ...
	I0913 11:21:25.513395    1774 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0913 11:21:25.515807    1774 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0913 11:21:25.516486    1774 api_server.go:141] control plane version: v1.31.1
	I0913 11:21:25.516492    1774 api_server.go:131] duration metric: took 3.10275ms to wait for apiserver health ...
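The healthz probe at 11:21:25.513 through 25.516 is an HTTPS GET against the apiserver that expects a 200 with body "ok". A minimal equivalent; the skip-verify TLS config is an assumption suitable only for a disposable test VM, since a real client would verify the apiserver certificate against the cluster CA from the kubeconfig:

    package kubewait

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz returns nil when GET <base>/healthz answers 200 "ok".
    func checkHealthz(base string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for a throwaway test VM only; verify against
                // the cluster CA in anything real.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
        }
        return nil
    }

    // Usage, matching the log: checkHealthz("https://192.168.105.2:8443")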
	I0913 11:21:25.516496    1774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 11:21:25.521708    1774 system_pods.go:59] 17 kube-system pods found
	I0913 11:21:25.521716    1774 system_pods.go:61] "coredns-7c65d6cfc9-68gld" [c3a9d142-3fc1-43eb-8a46-dfb7b2f58420] Running
	I0913 11:21:25.521721    1774 system_pods.go:61] "csi-hostpath-attacher-0" [764b9d83-7fe3-45bf-91ef-b728d61e5134] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 11:21:25.521724    1774 system_pods.go:61] "csi-hostpath-resizer-0" [01101e88-6c00-4890-88f4-4883851afc5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 11:21:25.521727    1774 system_pods.go:61] "csi-hostpathplugin-g89nf" [3b9ad04e-2e80-419f-99b3-21de191e9474] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 11:21:25.521731    1774 system_pods.go:61] "etcd-addons-166000" [08a42d3d-61f8-4aa7-9035-6b1dba2fb48e] Running
	I0913 11:21:25.521733    1774 system_pods.go:61] "kube-apiserver-addons-166000" [0d0a2936-969c-49b6-abac-6e93df9d76ee] Running
	I0913 11:21:25.521735    1774 system_pods.go:61] "kube-controller-manager-addons-166000" [b4f5a425-12ca-4fee-95db-5df9c063b8d1] Running
	I0913 11:21:25.521745    1774 system_pods.go:61] "kube-ingress-dns-minikube" [f8b3eee7-945d-4a92-b5d8-23728ddf85a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 11:21:25.521749    1774 system_pods.go:61] "kube-proxy-lrwqv" [591ffb82-812e-4f97-8fde-13682feb3085] Running
	I0913 11:21:25.521751    1774 system_pods.go:61] "kube-scheduler-addons-166000" [5115cc6f-9e34-455d-9c42-be62ba9706ff] Running
	I0913 11:21:25.521754    1774 system_pods.go:61] "metrics-server-84c5f94fbc-pzfzj" [3fb84404-de93-448a-b27c-5ae8d61b4079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 11:21:25.521757    1774 system_pods.go:61] "nvidia-device-plugin-daemonset-jfh67" [363aebd6-7e4c-4855-b561-334e37188d45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 11:21:25.521761    1774 system_pods.go:61] "registry-66c9cd494c-ldnch" [c3d43aaf-f9da-480c-814d-b04f250bd74e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 11:21:25.521764    1774 system_pods.go:61] "registry-proxy-q27zh" [f3c7273d-3e4f-4852-9991-fc159c509855] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 11:21:25.521768    1774 system_pods.go:61] "snapshot-controller-56fcc65765-m8nv8" [1c1be40c-2dd9-45c3-aa25-cef0a01a0572] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 11:21:25.521771    1774 system_pods.go:61] "snapshot-controller-56fcc65765-xdhgd" [dec57474-5d33-43b1-b2e2-ae00708f569a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 11:21:25.521772    1774 system_pods.go:61] "storage-provisioner" [be0e98be-5e26-4b5a-a746-679ab0021d6b] Running
	I0913 11:21:25.521775    1774 system_pods.go:74] duration metric: took 5.276417ms to wait for pod list to return data ...
	I0913 11:21:25.521778    1774 default_sa.go:34] waiting for default service account to be created ...
	I0913 11:21:25.522803    1774 default_sa.go:45] found service account: "default"
	I0913 11:21:25.522808    1774 default_sa.go:55] duration metric: took 1.028542ms for default service account to be created ...
	I0913 11:21:25.522811    1774 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 11:21:25.607048    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:25.607091    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:25.704299    1774 system_pods.go:86] 17 kube-system pods found
	I0913 11:21:25.704309    1774 system_pods.go:89] "coredns-7c65d6cfc9-68gld" [c3a9d142-3fc1-43eb-8a46-dfb7b2f58420] Running
	I0913 11:21:25.704314    1774 system_pods.go:89] "csi-hostpath-attacher-0" [764b9d83-7fe3-45bf-91ef-b728d61e5134] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 11:21:25.704317    1774 system_pods.go:89] "csi-hostpath-resizer-0" [01101e88-6c00-4890-88f4-4883851afc5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 11:21:25.704321    1774 system_pods.go:89] "csi-hostpathplugin-g89nf" [3b9ad04e-2e80-419f-99b3-21de191e9474] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 11:21:25.704324    1774 system_pods.go:89] "etcd-addons-166000" [08a42d3d-61f8-4aa7-9035-6b1dba2fb48e] Running
	I0913 11:21:25.704327    1774 system_pods.go:89] "kube-apiserver-addons-166000" [0d0a2936-969c-49b6-abac-6e93df9d76ee] Running
	I0913 11:21:25.704329    1774 system_pods.go:89] "kube-controller-manager-addons-166000" [b4f5a425-12ca-4fee-95db-5df9c063b8d1] Running
	I0913 11:21:25.704332    1774 system_pods.go:89] "kube-ingress-dns-minikube" [f8b3eee7-945d-4a92-b5d8-23728ddf85a7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0913 11:21:25.704333    1774 system_pods.go:89] "kube-proxy-lrwqv" [591ffb82-812e-4f97-8fde-13682feb3085] Running
	I0913 11:21:25.704336    1774 system_pods.go:89] "kube-scheduler-addons-166000" [5115cc6f-9e34-455d-9c42-be62ba9706ff] Running
	I0913 11:21:25.704339    1774 system_pods.go:89] "metrics-server-84c5f94fbc-pzfzj" [3fb84404-de93-448a-b27c-5ae8d61b4079] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 11:21:25.704342    1774 system_pods.go:89] "nvidia-device-plugin-daemonset-jfh67" [363aebd6-7e4c-4855-b561-334e37188d45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0913 11:21:25.704345    1774 system_pods.go:89] "registry-66c9cd494c-ldnch" [c3d43aaf-f9da-480c-814d-b04f250bd74e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 11:21:25.704348    1774 system_pods.go:89] "registry-proxy-q27zh" [f3c7273d-3e4f-4852-9991-fc159c509855] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 11:21:25.704351    1774 system_pods.go:89] "snapshot-controller-56fcc65765-m8nv8" [1c1be40c-2dd9-45c3-aa25-cef0a01a0572] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 11:21:25.704355    1774 system_pods.go:89] "snapshot-controller-56fcc65765-xdhgd" [dec57474-5d33-43b1-b2e2-ae00708f569a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 11:21:25.704357    1774 system_pods.go:89] "storage-provisioner" [be0e98be-5e26-4b5a-a746-679ab0021d6b] Running
	I0913 11:21:25.704360    1774 system_pods.go:126] duration metric: took 181.552917ms to wait for k8s-apps to be running ...
	I0913 11:21:25.704364    1774 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 11:21:25.704425    1774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 11:21:25.713435    1774 system_svc.go:56] duration metric: took 9.067083ms WaitForService to wait for kubelet
	I0913 11:21:25.713447    1774 kubeadm.go:582] duration metric: took 8.658372167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 11:21:25.713456    1774 node_conditions.go:102] verifying NodePressure condition ...
	I0913 11:21:25.898329    1774 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 11:21:25.898338    1774 node_conditions.go:123] node cpu capacity is 2
	I0913 11:21:25.898343    1774 node_conditions.go:105] duration metric: took 184.891709ms to run NodePressure ...
	I0913 11:21:25.898349    1774 start.go:241] waiting for startup goroutines ...
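The NodePressure verification reads the node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs) and its pressure conditions. A rough client-go sketch; the field access is the real Kubernetes API, but the pass/fail policy below is a guess at the check's intent, not minikube's exact rule:

    package kubewait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeHealthy prints the node's capacity and fails if any pressure
    // condition is reported True.
    func nodeHealthy(ctx context.Context, cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("ephemeral storage %s, cpu %s\n", storage.String(), cpu.String())
        for _, c := range node.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s under %s", name, c.Type)
                }
            }
        }
        return nil
    }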
	I0913 11:21:26.008181    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:26.091158    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:26.105655    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:26.508606    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:26.560618    1774 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 11:21:26.560634    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:26.589738    1774 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 11:21:26.595645    1774 addons.go:234] Setting addon gcp-auth=true in "addons-166000"
	I0913 11:21:26.595663    1774 host.go:66] Checking if "addons-166000" exists ...
	I0913 11:21:26.596404    1774 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 11:21:26.596411    1774 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/addons-166000/id_rsa Username:docker}
	I0913 11:21:26.609191    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:26.609322    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:26.622639    1774 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 11:21:26.625512    1774 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 11:21:26.629487    1774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 11:21:26.629493    1774 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 11:21:26.636329    1774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 11:21:26.636338    1774 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 11:21:26.642807    1774 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 11:21:26.642813    1774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 11:21:26.648507    1774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
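Each apply in this log is executed on the guest over the same SSH channel: minikube composes a single shell command (sudo, an explicit KUBECONFIG, the version-pinned kubectl binary, and the -f list) and runs it remotely. Continuing the x/crypto/ssh sketch from earlier, running such a command and capturing stdout might look like this; runRemote is illustrative, not minikube's ssh_runner:

    package kubewait

    import (
        "golang.org/x/crypto/ssh"
    )

    // runRemote executes cmd on the guest over an existing SSH client
    // and returns its stdout.
    func runRemote(client *ssh.Client, cmd string) (string, error) {
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.Output(cmd)
        return string(out), err
    }

    // Matching the gcp-auth step above:
    //   runRemote(client, "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
    //       "/var/lib/minikube/binaries/v1.31.1/kubectl apply "+
    //       "-f /etc/kubernetes/addons/gcp-auth-ns.yaml "+
    //       "-f /etc/kubernetes/addons/gcp-auth-service.yaml "+
    //       "-f /etc/kubernetes/addons/gcp-auth-webhook.yaml")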
	I0913 11:21:26.918911    1774 addons.go:475] Verifying addon gcp-auth=true in "addons-166000"
	I0913 11:21:26.922652    1774 out.go:177] * Verifying gcp-auth addon...
	I0913 11:21:26.929974    1774 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 11:21:26.930999    1774 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 11:21:27.050480    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:27.151154    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:27.151281    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:27.535424    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:27.634420    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:27.634627    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:28.008007    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:28.091077    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:28.106160    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:28.535065    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:28.634888    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:28.635022    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:29.007330    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:29.091154    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:29.106074    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:29.535599    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:29.590964    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:29.603908    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:30.123210    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:30.123350    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:30.123483    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:30.506822    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:30.590656    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:30.605142    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:31.008127    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:31.090861    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:31.105063    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:31.509236    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:31.590763    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:31.603155    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:32.008200    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:32.090721    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:32.104892    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:32.508291    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:32.590618    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:32.604849    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:33.010527    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:33.091446    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:33.105473    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:33.508229    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:33.590922    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:33.604649    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:34.033620    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:34.089209    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:34.104977    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:34.509958    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:34.588832    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:34.605068    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:35.007835    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:35.090516    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:35.104844    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:35.507948    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:35.590576    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:35.604852    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:36.008171    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:36.090668    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:36.105038    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:36.507998    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:36.590293    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:36.604885    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:37.008053    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:37.090638    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:37.103714    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:37.507965    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:37.590519    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:37.605282    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:38.006637    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:38.089944    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:38.104779    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:38.513071    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:38.591718    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:38.605466    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:39.007758    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:39.090130    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:39.104728    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:39.508597    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:39.590596    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:39.605831    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:40.007775    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:40.090362    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:40.103778    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:40.507648    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:40.590363    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:40.603032    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:41.008188    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:41.090203    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:41.103683    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:41.514086    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:41.592612    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:41.693758    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:42.034864    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:42.090577    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:42.105185    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:42.507814    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:42.590595    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:42.691263    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:43.012184    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:43.090981    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:43.104991    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:43.510708    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:43.591700    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:43.606065    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:44.007465    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:44.090681    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:44.103996    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:44.507888    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:44.590384    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:44.606507    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:45.007658    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:45.089947    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:45.104707    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:45.507356    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:45.589379    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:45.604663    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:46.010733    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:46.089226    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:46.104597    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:46.507653    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:46.623906    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:46.623967    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:47.007486    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:47.090085    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:47.104411    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:47.507607    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:47.589912    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 11:21:47.604579    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:48.007633    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:48.089781    1774 kapi.go:107] duration metric: took 28.003457834s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 11:21:48.104483    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:48.507216    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:48.604571    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:49.008687    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:49.104944    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:49.532833    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:49.604878    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:50.007326    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:50.104170    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:50.507179    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:50.604566    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:51.007419    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:51.106827    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:51.507000    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:51.606246    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:52.007276    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:52.104600    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:52.506993    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:52.604598    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:53.007399    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:53.104771    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:53.507115    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:53.604470    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:54.034200    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:54.104489    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:54.507516    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:54.605057    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:55.033780    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:55.104563    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:55.509813    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:55.606462    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:56.006983    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:56.104428    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:56.507251    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:56.603280    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:57.006055    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:57.104668    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:57.507356    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:57.607295    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:58.034183    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:58.103847    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:58.510479    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:58.604618    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:59.007263    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:59.104320    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:21:59.507309    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:21:59.604493    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:00.006645    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:00.104812    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:00.509642    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:00.606668    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:01.009171    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:01.104827    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:01.507618    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:01.604154    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:02.007262    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:02.103933    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:02.507408    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:02.604380    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:03.006130    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:03.104637    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:03.506401    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:03.603967    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:04.006954    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:04.104335    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:04.507478    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:04.604204    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:05.007541    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:05.104276    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:05.506704    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:05.604321    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:06.007744    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:06.107633    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:06.512124    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:06.606335    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:07.037357    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:07.105249    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:07.507217    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:07.604736    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:08.007626    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:08.102280    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:08.504974    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:08.604696    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:09.034008    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:09.104202    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:09.506502    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:09.603920    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:10.006232    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:10.104046    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:10.506770    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:10.604172    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:11.033661    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:11.104334    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:11.506309    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:11.605230    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:12.006394    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:12.104035    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:12.507988    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:12.603846    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:13.006878    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:13.103981    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:13.506628    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:13.604003    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:14.006656    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:14.103490    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:14.506638    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:14.604123    1774 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 11:22:15.006438    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:15.104382    1774 kapi.go:107] duration metric: took 55.504554209s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 11:22:15.509398    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:16.006719    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:16.506658    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:17.006074    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:17.506439    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:18.010934    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:18.506549    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:19.006266    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:19.533212    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:20.006886    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:20.506426    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:21.006055    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:21.506241    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:22.006525    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:22.532825    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:23.008649    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:23.506118    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:24.006180    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:24.506294    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:25.006888    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 11:22:25.506465    1774 kapi.go:107] duration metric: took 1m3.504596166s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 11:22:48.930873    1774 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 11:22:48.930883    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:49.431680    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:49.934840    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:50.435345    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:50.931896    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:51.429702    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:51.931129    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:52.432123    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:52.932873    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:53.430868    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:53.932906    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:54.430880    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:54.929535    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:55.435021    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:55.931006    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:56.433958    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:56.934212    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:57.436399    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:57.934074    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:58.436552    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:58.931181    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:59.432643    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:22:59.932594    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:00.437243    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:00.935992    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:01.430628    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:01.930384    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:02.432021    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:02.934689    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:03.434402    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:03.932840    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:04.435105    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:04.930055    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:05.434604    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:05.931843    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:06.435035    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:06.932930    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:07.442848    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:07.931102    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:08.436633    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:08.931673    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:09.430780    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:09.931356    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:10.436532    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:10.936395    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:11.431666    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:11.935782    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:12.435725    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:12.935263    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:13.436758    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:13.933044    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:14.435222    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:14.931506    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:15.440334    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:15.931572    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:16.437054    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:16.938974    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:17.435249    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:17.933656    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:18.437046    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:18.932107    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:19.431792    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:19.936836    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:20.432593    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:20.933370    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:21.432839    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:21.931217    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:22.431833    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:22.935279    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:23.435509    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:23.932389    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:24.434897    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:24.936763    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:25.432212    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:25.935694    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:26.435380    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:26.934340    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:27.437978    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:27.934271    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:28.435894    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:28.934478    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:29.429946    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:29.933135    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:30.429131    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:30.930104    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:31.429616    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:31.929687    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:32.428990    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:32.933072    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:33.438113    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:33.930783    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:34.429407    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:34.931175    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:35.431101    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:35.934818    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:36.436748    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:36.931974    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:37.435158    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:37.933379    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:38.433864    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:38.929069    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:39.433225    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:39.933291    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:40.428948    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:40.929442    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:41.429246    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:41.929689    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:42.431176    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:42.930862    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:43.430451    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:43.930421    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:44.429997    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:44.929817    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:45.435730    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:45.932208    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:46.434747    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:46.934096    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:47.433505    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:47.930677    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:48.430502    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:48.935487    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:49.430644    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:49.930183    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:50.436197    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:50.932134    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:51.428894    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:51.929300    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:52.435927    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:52.934187    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:53.429544    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:53.928482    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:54.428010    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:54.928372    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:55.428777    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:55.928323    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:56.428251    1774 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 11:23:56.928359    1774 kapi.go:107] duration metric: took 2m30.003883833s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 11:23:56.932470    1774 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-166000 cluster.
	I0913 11:23:56.940459    1774 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 11:23:56.945392    1774 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
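
For reference, the `gcp-auth-skip-secret` opt-out mentioned in the message above can be attached when a pod is created programmatically. A minimal client-go sketch, assuming a local kubeconfig; the pod name, image, and label value here are illustrative — only the label key comes from the log message:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			// The gcp-auth webhook skips pods carrying this label; the key is
			// taken from the log message above, the value is illustrative.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox",
				Args:  []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
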
	I0913 11:23:56.948504    1774 out.go:177] * Enabled addons: yakd, inspektor-gadget, storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, volcano, metrics-server, volumesnapshots, storage-provisioner-rancher, registry, ingress, csi-hostpath-driver, gcp-auth
	I0913 11:23:56.952443    1774 addons.go:510] duration metric: took 2m39.9028745s for enable addons: enabled=[yakd inspektor-gadget storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner default-storageclass volcano metrics-server volumesnapshots storage-provisioner-rancher registry ingress csi-hostpath-driver gcp-auth]
	I0913 11:23:56.952457    1774 start.go:246] waiting for cluster config update ...
	I0913 11:23:56.952466    1774 start.go:255] writing updated cluster config ...
	I0913 11:23:56.952901    1774 ssh_runner.go:195] Run: rm -f paused
	I0913 11:23:57.107781    1774 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0913 11:23:57.110516    1774 out.go:201] 
	W0913 11:23:57.113450    1774 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0913 11:23:57.117438    1774 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0913 11:23:57.125456    1774 out.go:177] * Done! kubectl is now configured to use "addons-166000" cluster and "default" namespace by default
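
The long `kapi.go:96` / `kapi.go:107` runs above come from a poll-until-running loop over a pod label selector, with a duration metric reported once every matching pod is up. A minimal client-go sketch of that pattern — not minikube's actual kapi.go code; the namespace, selector, interval, and timeout are assumptions chosen to mirror the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector is Running, printing
// a "waiting for pod" line per check, like the kapi.go:96 entries above.
func waitForLabel(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // tolerate transient errors and empty lists; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		// Analogue of the kapi.go:107 duration metric.
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(client, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
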
	
	
	==> Docker <==
	Sep 13 18:33:20 addons-166000 dockerd[1289]: time="2024-09-13T18:33:20.245899309Z" level=warning msg="cleaning up after shim disconnected" id=fb40ef952956bbc2088895813a84ce46b844685f118e676df3fd8a2c5999d338 namespace=moby
	Sep 13 18:33:20 addons-166000 dockerd[1289]: time="2024-09-13T18:33:20.245904684Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:33:30 addons-166000 dockerd[1282]: time="2024-09-13T18:33:30.308531441Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:30 addons-166000 dockerd[1282]: time="2024-09-13T18:33:30.318033271Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 18:33:47 addons-166000 dockerd[1282]: time="2024-09-13T18:33:47.106213858Z" level=info msg="ignoring event" container=6eda39fab5ff73edea8684d6bcf8dab8b8aaa1c5ccad05ed9c97a26e057084dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.107375882Z" level=info msg="shim disconnected" id=6eda39fab5ff73edea8684d6bcf8dab8b8aaa1c5ccad05ed9c97a26e057084dd namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.107431047Z" level=warning msg="cleaning up after shim disconnected" id=6eda39fab5ff73edea8684d6bcf8dab8b8aaa1c5ccad05ed9c97a26e057084dd namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.107435922Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1282]: time="2024-09-13T18:33:47.246547966Z" level=info msg="ignoring event" container=b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.246702672Z" level=info msg="shim disconnected" id=b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306 namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.246759171Z" level=warning msg="cleaning up after shim disconnected" id=b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306 namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.246775171Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.268835196Z" level=info msg="shim disconnected" id=879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970 namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1282]: time="2024-09-13T18:33:47.268943486Z" level=info msg="ignoring event" container=879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.268999652Z" level=warning msg="cleaning up after shim disconnected" id=879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970 namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.269012402Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.283914040Z" level=warning msg="cleanup warnings time=\"2024-09-13T18:33:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1282]: time="2024-09-13T18:33:47.326700029Z" level=info msg="ignoring event" container=2a243ee23dc66d2d66759a900d0f644916f6276e66316f917f9b7dfd349b791c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.326764153Z" level=info msg="shim disconnected" id=2a243ee23dc66d2d66759a900d0f644916f6276e66316f917f9b7dfd349b791c namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.327088647Z" level=warning msg="cleaning up after shim disconnected" id=2a243ee23dc66d2d66759a900d0f644916f6276e66316f917f9b7dfd349b791c namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.327105605Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1282]: time="2024-09-13T18:33:47.383497836Z" level=info msg="ignoring event" container=b24f7662065a0e3988d5d36ab78aa1700a1291c700c56e28007af653fad20087 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.383749124Z" level=info msg="shim disconnected" id=b24f7662065a0e3988d5d36ab78aa1700a1291c700c56e28007af653fad20087 namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.383783665Z" level=warning msg="cleaning up after shim disconnected" id=b24f7662065a0e3988d5d36ab78aa1700a1291c700c56e28007af653fad20087 namespace=moby
	Sep 13 18:33:47 addons-166000 dockerd[1289]: time="2024-09-13T18:33:47.383787957Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	e354f6e738249       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              34 seconds ago      Exited              helper-pod                               0                   d09b4ed8223be       helper-pod-create-pvc-71858af5-0dcd-4beb-8a2f-1b15243c2fcb
	55ea4509a11bc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            51 seconds ago      Exited              gadget                                   7                   b4a5b709c746d       gadget-bxqh8
	7351c8bb44faa       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   d97ff2708e804       gcp-auth-89d5ffd79-vwffl
	cc76d1404f5e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   e099bf7879f16       csi-hostpathplugin-g89nf
	47965e735bf1e       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   e099bf7879f16       csi-hostpathplugin-g89nf
	5507850f9a816       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   e099bf7879f16       csi-hostpathplugin-g89nf
	10af87e6f8c23       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   e099bf7879f16       csi-hostpathplugin-g89nf
	554aa5b128dda       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   e099bf7879f16       csi-hostpathplugin-g89nf
	a2a4fe2ea4fe6       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   7dbae46e9eab5       ingress-nginx-controller-bc57996ff-6g77t
	38022d51cbf31       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   91614fb08828f       csi-hostpath-resizer-0
	0ff66bdd819bf       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   e099bf7879f16       csi-hostpathplugin-g89nf
	74ee293d7ea35       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   66088f432819a       csi-hostpath-attacher-0
	64a7fa5c60f00       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   f57b0a37108cc       local-path-provisioner-86d989889c-jb8xs
	4f93a79189962       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   e1dd72273f838       snapshot-controller-56fcc65765-m8nv8
	a824f7394db67       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   0b899212c50b9       metrics-server-84c5f94fbc-pzfzj
	258c4f1ae86ec       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   487beca32bc7e       snapshot-controller-56fcc65765-xdhgd
	d8d08c1e03bf0       420193b27261a                                                                                                                                11 minutes ago      Exited              patch                                    1                   cc6d4c88f0a49       ingress-nginx-admission-patch-b54pd
	11751219f998b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   5149b1fe8c6d2       ingress-nginx-admission-create-p4mb4
	879037d614c56       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              12 minutes ago      Exited              registry-proxy                           0                   b24f7662065a0       registry-proxy-q27zh
	b11200cbea55c       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             12 minutes ago      Exited              registry                                 0                   2a243ee23dc66       registry-66c9cd494c-ldnch
	ece424def8be2       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago      Running             cloud-spanner-emulator                   0                   3af879d06fb0e       cloud-spanner-emulator-769b77f747-wz7nx
	60f1d7b10bf37       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   6d9cf800aa767       kube-ingress-dns-minikube
	553ae7298bdd9       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   ef912e2df7487       storage-provisioner
	19e29cc0979bc       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   e377b91e5c5e6       coredns-7c65d6cfc9-68gld
	43ad8181af94c       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   93bcabbc6168c       kube-proxy-lrwqv
	3d88eb1da782a       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   5a4ae38aaba64       kube-apiserver-addons-166000
	e5efdc140b9fb       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   a13948a35b9d1       etcd-addons-166000
	4adc0d39784b8       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   243891df904d0       kube-controller-manager-addons-166000
	0d9ab4faf622c       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   d04de6d4e35d6       kube-scheduler-addons-166000
	
	
	==> controller_ingress [a2a4fe2ea4fe] <==
	W0913 18:22:14.969956       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0913 18:22:14.970036       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0913 18:22:14.972905       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0913 18:22:15.078294       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0913 18:22:15.093837       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0913 18:22:15.099025       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0913 18:22:15.108753       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b0602134-80e0-44c7-8771-9eb9d80e6eec", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0913 18:22:15.109238       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"6ab384a0-aa9a-4736-bd04-65a594b4b9fd", APIVersion:"v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0913 18:22:15.109301       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"95306106-d59b-417d-952d-b02300f66336", APIVersion:"v1", ResourceVersion:"545", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0913 18:22:16.300539       7 nginx.go:317] "Starting NGINX process"
	I0913 18:22:16.300698       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0913 18:22:16.300931       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0913 18:22:16.301034       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0913 18:22:16.308423       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0913 18:22:16.308537       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-6g77t"
	I0913 18:22:16.311177       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-6g77t" node="addons-166000"
	I0913 18:22:16.330163       7 controller.go:213] "Backend successfully reloaded"
	I0913 18:22:16.330261       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0913 18:22:16.330289       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-6g77t", UID:"5d9ad920-b2d4-41a5-8149-e3bd2fc566ce", APIVersion:"v1", ResourceVersion:"1200", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [19e29cc0979b] <==
	[INFO] 127.0.0.1:43748 - 57691 "HINFO IN 4869188573154526494.1794802850522527332. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010145245s
	[INFO] 10.244.0.8:45126 - 52716 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118285s
	[INFO] 10.244.0.8:45126 - 50158 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151711s
	[INFO] 10.244.0.8:54128 - 61432 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000026717s
	[INFO] 10.244.0.8:54128 - 60155 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104531s
	[INFO] 10.244.0.8:43893 - 12694 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000026716s
	[INFO] 10.244.0.8:43893 - 25237 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042596s
	[INFO] 10.244.0.8:44139 - 15773 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152794s
	[INFO] 10.244.0.8:44139 - 27804 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000026632s
	[INFO] 10.244.0.8:58630 - 42757 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000028841s
	[INFO] 10.244.0.8:58630 - 64007 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042679s
	[INFO] 10.244.0.8:53882 - 60732 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000012712s
	[INFO] 10.244.0.8:53882 - 3902 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048347s
	[INFO] 10.244.0.8:53121 - 64770 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000017755s
	[INFO] 10.244.0.8:53121 - 52737 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025132s
	[INFO] 10.244.0.8:51160 - 19660 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00001192s
	[INFO] 10.244.0.8:51160 - 57551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000030467s
	[INFO] 10.244.0.25:45073 - 38506 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000455346s
	[INFO] 10.244.0.25:46175 - 50556 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000391416s
	[INFO] 10.244.0.25:41803 - 51326 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000057554s
	[INFO] 10.244.0.25:48717 - 2536 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000027547s
	[INFO] 10.244.0.25:49506 - 26872 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031131s
	[INFO] 10.244.0.25:58666 - 3417 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023422s
	[INFO] 10.244.0.25:34617 - 24625 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001287648s
	[INFO] 10.244.0.25:34002 - 60104 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001252307s
	
	
	==> describe nodes <==
	Name:               addons-166000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-166000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=addons-166000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T11_21_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-166000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-166000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:21:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-166000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:33:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:33:47 +0000   Fri, 13 Sep 2024 18:21:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:33:47 +0000   Fri, 13 Sep 2024 18:21:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:33:47 +0000   Fri, 13 Sep 2024 18:21:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:33:47 +0000   Fri, 13 Sep 2024 18:21:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-166000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 b92ed245eae146e0b5082a4f3d86a761
	  System UUID:                b92ed245eae146e0b5082a4f3d86a761
	  Boot ID:                    070ebd42-91ac-4383-a0d7-eda47395f303
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-wz7nx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gadget                      gadget-bxqh8                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-vwffl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6g77t    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-68gld                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-g89nf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-166000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-166000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-166000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lrwqv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-166000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-pzfzj             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-m8nv8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-xdhgd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-jb8xs     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-166000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-166000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-166000 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-166000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-166000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-166000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-166000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-166000 event: Registered Node addons-166000 in Controller
	
	
	==> dmesg <==
	[  +0.170782] systemd-fstab-generator[2277]: Ignoring "noauto" option for root device
	[  +4.808219] kauditd_printk_skb: 261 callbacks suppressed
	[  +4.998233] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.409337] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.336087] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.143187] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.298185] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 18:22] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.796254] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.443272] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.260434] kauditd_printk_skb: 31 callbacks suppressed
	[Sep13 18:23] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.020466] kauditd_printk_skb: 46 callbacks suppressed
	[ +12.798261] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 18:24] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.268580] kauditd_printk_skb: 2 callbacks suppressed
	[ +15.920620] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.141959] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 18:27] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 18:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.186707] kauditd_printk_skb: 11 callbacks suppressed
	[ +14.116969] kauditd_printk_skb: 33 callbacks suppressed
	[Sep13 18:33] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.489573] kauditd_printk_skb: 23 callbacks suppressed
	[ +30.440624] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [e5efdc140b9f] <==
	{"level":"info","ts":"2024-09-13T18:21:07.626353Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2024-09-13T18:21:08.535935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T18:21:08.536060Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T18:21:08.536110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-13T18:21:08.536136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:08.536150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:08.536174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:08.536200Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-13T18:21:08.537697Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:08.538299Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-166000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T18:21:08.538548Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:08.538634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:21:08.538823Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:08.538918Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:21:08.538607Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:21:08.539232Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T18:21:08.539262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T18:21:08.540450Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:21:08.540451Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:21:08.542884Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-13T18:21:08.543684Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T18:22:12.818812Z","caller":"traceutil/trace.go:171","msg":"trace[1780920240] transaction","detail":"{read_only:false; response_revision:1193; number_of_response:1; }","duration":"100.312615ms","start":"2024-09-13T18:22:12.718488Z","end":"2024-09-13T18:22:12.818800Z","steps":["trace[1780920240] 'process raft request'  (duration: 100.189748ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T18:31:08.589395Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1874}
	{"level":"info","ts":"2024-09-13T18:31:08.680785Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1874,"took":"88.568672ms","hash":3494814923,"current-db-size-bytes":8814592,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4911104,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-13T18:31:08.680818Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3494814923,"revision":1874,"compact-revision":-1}
	
	
	==> gcp-auth [7351c8bb44fa] <==
	2024/09/13 18:23:56 GCP Auth Webhook started!
	2024/09/13 18:24:12 Ready to marshal response ...
	2024/09/13 18:24:12 Ready to write response ...
	2024/09/13 18:24:13 Ready to marshal response ...
	2024/09/13 18:24:13 Ready to write response ...
	2024/09/13 18:24:35 Ready to marshal response ...
	2024/09/13 18:24:35 Ready to write response ...
	2024/09/13 18:24:35 Ready to marshal response ...
	2024/09/13 18:24:35 Ready to write response ...
	2024/09/13 18:24:35 Ready to marshal response ...
	2024/09/13 18:24:35 Ready to write response ...
	2024/09/13 18:32:37 Ready to marshal response ...
	2024/09/13 18:32:37 Ready to write response ...
	2024/09/13 18:32:37 Ready to marshal response ...
	2024/09/13 18:32:37 Ready to write response ...
	2024/09/13 18:32:37 Ready to marshal response ...
	2024/09/13 18:32:37 Ready to write response ...
	2024/09/13 18:32:47 Ready to marshal response ...
	2024/09/13 18:32:47 Ready to write response ...
	2024/09/13 18:33:11 Ready to marshal response ...
	2024/09/13 18:33:11 Ready to write response ...
	2024/09/13 18:33:11 Ready to marshal response ...
	2024/09/13 18:33:11 Ready to write response ...
	2024/09/13 18:33:20 Ready to marshal response ...
	2024/09/13 18:33:20 Ready to write response ...
	
	
	==> kernel <==
	 18:33:47 up 12 min,  0 users,  load average: 0.46, 0.56, 0.44
	Linux addons-166000 5.10.207 #1 SMP PREEMPT Thu Sep 12 17:20:51 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d88eb1da782] <==
	W0913 18:23:30.012805       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.234.139:443: connect: connection refused
	E0913 18:23:30.013849       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.234.139:443: connect: connection refused" logger="UnhandledError"
	I0913 18:24:12.417141       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0913 18:24:12.428216       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0913 18:24:25.703992       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:25.735808       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:25.871332       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 18:24:25.885981       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 18:24:25.895543       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:26.061329       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:26.092004       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 18:24:26.183745       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 18:24:26.188692       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0913 18:24:26.761354       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0913 18:24:26.909004       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0913 18:24:27.125501       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 18:24:27.187240       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0913 18:24:27.189275       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 18:24:27.189277       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 18:24:27.326823       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0913 18:32:37.126513       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.57.19"}
	E0913 18:33:21.480640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:21.487462       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:21.493006       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 18:33:36.501169       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [4adc0d39784b] <==
	I0913 18:32:48.381211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="1.5µs"
	W0913 18:32:48.874022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:48.874063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:32:55.515049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:32:55.515169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:32:58.454173       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0913 18:32:59.672896       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.041µs"
	W0913 18:33:04.791387       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:04.791425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:08.857302       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:08.857452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:33:09.848734       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0913 18:33:14.144906       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:14.145133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:18.408390       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:18.408414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:33:20.695340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="1.792µs"
	W0913 18:33:40.369547       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:40.369684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:41.047657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:41.047745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 18:33:42.141161       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 18:33:42.141214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 18:33:47.100434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-166000"
	I0913 18:33:47.218559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.458µs"
	
	
	==> kube-proxy [43ad8181af94] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:21:17.129555       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:21:17.143643       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0913 18:21:17.143672       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:21:17.176770       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:21:17.176785       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:21:17.176796       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:21:17.177582       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:21:17.177714       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:21:17.177722       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:21:17.178415       1 config.go:199] "Starting service config controller"
	I0913 18:21:17.178434       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:21:17.178478       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:21:17.178482       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:21:17.179195       1 config.go:328] "Starting node config controller"
	I0913 18:21:17.179200       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:21:17.279870       1 shared_informer.go:320] Caches are synced for node config
	I0913 18:21:17.279895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:21:17.279939       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0d9ab4faf622] <==
	W0913 18:21:09.084985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 18:21:09.084990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.085004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 18:21:09.085011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.085024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:21:09.085032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.085051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:09.085059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.904866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 18:21:09.904911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.953833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 18:21:09.953879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.976154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:09.976346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:09.996734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 18:21:09.996788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:10.027223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 18:21:10.027454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:10.067868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 18:21:10.067958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:10.098155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 18:21:10.098273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 18:21:10.149043       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 18:21:10.149144       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0913 18:21:12.982994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 18:33:30 addons-166000 kubelet[2047]: E0913 18:33:30.322502    2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="8507119e-9eb5-4395-b98a-02109e20ac14"
	Sep 13 18:33:36 addons-166000 kubelet[2047]: I0913 18:33:36.129402    2047 scope.go:117] "RemoveContainer" containerID="55ea4509a11bc27324fb50d16390b3eb51e4dab955beafa6f8718a823321e33f"
	Sep 13 18:33:36 addons-166000 kubelet[2047]: E0913 18:33:36.129492    2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-bxqh8_gadget(bbcad02b-38bd-4248-afae-81c710361048)\"" pod="gadget/gadget-bxqh8" podUID="bbcad02b-38bd-4248-afae-81c710361048"
	Sep 13 18:33:41 addons-166000 kubelet[2047]: E0913 18:33:41.132637    2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="dca016ca-a2eb-48e9-bc07-910b25eeaec2"
	Sep 13 18:33:43 addons-166000 kubelet[2047]: E0913 18:33:43.140707    2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="8507119e-9eb5-4395-b98a-02109e20ac14"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.289146    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8507119e-9eb5-4395-b98a-02109e20ac14-gcp-creds\") pod \"8507119e-9eb5-4395-b98a-02109e20ac14\" (UID: \"8507119e-9eb5-4395-b98a-02109e20ac14\") "
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.289173    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxcq4\" (UniqueName: \"kubernetes.io/projected/8507119e-9eb5-4395-b98a-02109e20ac14-kube-api-access-bxcq4\") pod \"8507119e-9eb5-4395-b98a-02109e20ac14\" (UID: \"8507119e-9eb5-4395-b98a-02109e20ac14\") "
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.289556    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8507119e-9eb5-4395-b98a-02109e20ac14-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8507119e-9eb5-4395-b98a-02109e20ac14" (UID: "8507119e-9eb5-4395-b98a-02109e20ac14"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.296957    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8507119e-9eb5-4395-b98a-02109e20ac14-kube-api-access-bxcq4" (OuterVolumeSpecName: "kube-api-access-bxcq4") pod "8507119e-9eb5-4395-b98a-02109e20ac14" (UID: "8507119e-9eb5-4395-b98a-02109e20ac14"). InnerVolumeSpecName "kube-api-access-bxcq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.390256    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22xxd\" (UniqueName: \"kubernetes.io/projected/c3d43aaf-f9da-480c-814d-b04f250bd74e-kube-api-access-22xxd\") pod \"c3d43aaf-f9da-480c-814d-b04f250bd74e\" (UID: \"c3d43aaf-f9da-480c-814d-b04f250bd74e\") "
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.390387    2047 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8507119e-9eb5-4395-b98a-02109e20ac14-gcp-creds\") on node \"addons-166000\" DevicePath \"\""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.390395    2047 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bxcq4\" (UniqueName: \"kubernetes.io/projected/8507119e-9eb5-4395-b98a-02109e20ac14-kube-api-access-bxcq4\") on node \"addons-166000\" DevicePath \"\""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.391208    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d43aaf-f9da-480c-814d-b04f250bd74e-kube-api-access-22xxd" (OuterVolumeSpecName: "kube-api-access-22xxd") pod "c3d43aaf-f9da-480c-814d-b04f250bd74e" (UID: "c3d43aaf-f9da-480c-814d-b04f250bd74e"). InnerVolumeSpecName "kube-api-access-22xxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.490985    2047 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-22xxd\" (UniqueName: \"kubernetes.io/projected/c3d43aaf-f9da-480c-814d-b04f250bd74e-kube-api-access-22xxd\") on node \"addons-166000\" DevicePath \"\""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.591756    2047 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6xtt\" (UniqueName: \"kubernetes.io/projected/f3c7273d-3e4f-4852-9991-fc159c509855-kube-api-access-h6xtt\") pod \"f3c7273d-3e4f-4852-9991-fc159c509855\" (UID: \"f3c7273d-3e4f-4852-9991-fc159c509855\") "
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.592418    2047 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c7273d-3e4f-4852-9991-fc159c509855-kube-api-access-h6xtt" (OuterVolumeSpecName: "kube-api-access-h6xtt") pod "f3c7273d-3e4f-4852-9991-fc159c509855" (UID: "f3c7273d-3e4f-4852-9991-fc159c509855"). InnerVolumeSpecName "kube-api-access-h6xtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.692126    2047 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h6xtt\" (UniqueName: \"kubernetes.io/projected/f3c7273d-3e4f-4852-9991-fc159c509855-kube-api-access-h6xtt\") on node \"addons-166000\" DevicePath \"\""
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.762224    2047 scope.go:117] "RemoveContainer" containerID="b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.788010    2047 scope.go:117] "RemoveContainer" containerID="b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: E0913 18:33:47.788502    2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306" containerID="b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.788520    2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306"} err="failed to get container status \"b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306\": rpc error: code = Unknown desc = Error response from daemon: No such container: b11200cbea55c35dededd36e352355b287c67cfe4a2751623748abc8974ee306"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.788533    2047 scope.go:117] "RemoveContainer" containerID="879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.809700    2047 scope.go:117] "RemoveContainer" containerID="879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: E0913 18:33:47.810406    2047 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970" containerID="879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970"
	Sep 13 18:33:47 addons-166000 kubelet[2047]: I0913 18:33:47.810423    2047 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970"} err="failed to get container status \"879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970\": rpc error: code = Unknown desc = Error response from daemon: No such container: 879037d614c56e9dc5df99f57060c251c61579797bbab50ba3d3665098c5d970"
	
	
	==> storage-provisioner [553ae7298bdd] <==
	I0913 18:21:18.561952       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:21:18.568070       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:21:18.573638       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:21:18.585293       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:21:18.585411       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-166000_7275c2be-e0d9-40f3-aefb-e6a3ca827cf9!
	I0913 18:21:18.585972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b33cab0-ce5c-4c62-bf5a-d2d0301dba38", APIVersion:"v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-166000_7275c2be-e0d9-40f3-aefb-e6a3ca827cf9 became leader
	I0913 18:21:18.685512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-166000_7275c2be-e0d9-40f3-aefb-e6a3ca827cf9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-166000 -n addons-166000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-166000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-p4mb4 ingress-nginx-admission-patch-b54pd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-166000 describe pod busybox ingress-nginx-admission-create-p4mb4 ingress-nginx-admission-patch-b54pd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-166000 describe pod busybox ingress-nginx-admission-create-p4mb4 ingress-nginx-admission-patch-b54pd: exit status 1 (41.092125ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-166000/192.168.105.2
	Start Time:       Fri, 13 Sep 2024 11:24:35 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lldq6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lldq6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-166000
	  Normal   Pulling    7m37s (x4 over 9m11s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m37s (x4 over 9m11s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m37s (x4 over 9m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m25s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x20 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p4mb4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b54pd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-166000 describe pod busybox ingress-nginx-admission-create-p4mb4 ingress-nginx-admission-patch-b54pd: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.34s)
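The registry failure above reduces to unauthenticated pulls of gcr.io/k8s-minikube/busybox inside the node. A minimal manual reproduction, assuming the addons-166000 profile is still running (image tag taken from the pod events above):

	out/minikube-darwin-arm64 ssh -p addons-166000 -- "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

If this returns the same "unauthorized: authentication failed" from gcr.io, the problem is in the registry path itself rather than in the test harness.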

TestCertOptions (10.3s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-682000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-682000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.027434584s)

-- stdout --
	* [cert-options-682000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-682000" primary control-plane node in "cert-options-682000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-682000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-682000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-682000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
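
Every failure in this block, and in the qemu2 failures that follow, reduces to the same symptom: the driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet. A minimal Go sketch, not part of the test suite, that probes the socket the logged qemu invocation depends on:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client is pointed at in the
		// qemu launch lines captured in this report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this CI host the dial presumably fails with "connect:
			// connection refused", matching the GUEST_PROVISION errors above,
			// i.e. the socket_vmnet daemon is not running or not listening.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Because every subsequent qemu2 start in this report hits the same refused connection, the per-test assertions that follow are downstream noise rather than independent regressions.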
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-682000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-682000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.131458ms)

-- stdout --
	* The control-plane node cert-options-682000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-682000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-682000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
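
The four assertions above are checks against the certificate's Subject Alternative Name list; they fail here only because the start never produced a VM whose cert could be read. For reference, a hedged sketch of the same SAN inspection in Go, assuming the cert has first been copied out of the VM to a local apiserver.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // expected: localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // expected: 127.0.0.1, 192.168.15.15
	}
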
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-682000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
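
The port assertion parses the cluster server URL out of the kubeconfig; here the view is empty (clusters: null) because no cluster was ever registered. A small sketch of the extraction, with the server address hypothetical and the expected port taken from the --apiserver-port=8555 flag above:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// On a successful run the kubeconfig would carry a server entry of
		// roughly this shape (address made up for illustration).
		server := "https://192.168.105.4:8555"
		u, err := url.Parse(server)
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver port:", u.Port()) // the test expects "8555"
	}
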
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-682000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-682000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.67275ms)

-- stdout --
	* The control-plane node cert-options-682000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-682000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-682000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-682000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-682000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-13 12:06:54.83036 -0700 PDT m=+2823.293751876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-682000 -n cert-options-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-682000 -n cert-options-682000: exit status 7 (30.702542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-682000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-682000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-682000
--- FAIL: TestCertOptions (10.30s)

TestCertExpiration (195.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-947000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-947000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.962054167s)

-- stdout --
	* [cert-expiration-947000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-947000" primary control-plane node in "cert-expiration-947000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-947000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-947000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-947000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-947000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-947000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.22486775s)

-- stdout --
	* [cert-expiration-947000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-947000" primary control-plane node in "cert-expiration-947000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-947000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-947000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-947000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-947000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-947000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-947000" primary control-plane node in "cert-expiration-947000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-947000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-947000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-947000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-13 12:09:54.713896 -0700 PDT m=+3003.184426918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-947000 -n cert-expiration-947000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-947000 -n cert-expiration-947000: exit status 7 (59.1455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-947000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-947000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-947000
--- FAIL: TestCertExpiration (195.33s)

TestDockerFlags (10.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-661000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-661000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.876183375s)

-- stdout --
	* [docker-flags-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-661000" primary control-plane node in "docker-flags-661000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-661000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:06:34.551968    4762 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:06:34.552076    4762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:34.552081    4762 out.go:358] Setting ErrFile to fd 2...
	I0913 12:06:34.552084    4762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:34.552210    4762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:06:34.553272    4762 out.go:352] Setting JSON to false
	I0913 12:06:34.569205    4762 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3957,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:06:34.569274    4762 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:06:34.574863    4762 out.go:177] * [docker-flags-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:06:34.580736    4762 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:06:34.580819    4762 notify.go:220] Checking for updates...
	I0913 12:06:34.588693    4762 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:06:34.591649    4762 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:06:34.594704    4762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:06:34.596123    4762 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:06:34.599696    4762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:06:34.602972    4762 config.go:182] Loaded profile config "force-systemd-flag-254000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:06:34.603036    4762 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:06:34.603096    4762 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:06:34.607463    4762 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:06:34.614706    4762 start.go:297] selected driver: qemu2
	I0913 12:06:34.614713    4762 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:06:34.614719    4762 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:06:34.616862    4762 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:06:34.619691    4762 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:06:34.622759    4762 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0913 12:06:34.622781    4762 cni.go:84] Creating CNI manager for ""
	I0913 12:06:34.622810    4762 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:06:34.622814    4762 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:06:34.622847    4762 start.go:340] cluster config:
	{Name:docker-flags-661000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:06:34.626540    4762 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:06:34.633686    4762 out.go:177] * Starting "docker-flags-661000" primary control-plane node in "docker-flags-661000" cluster
	I0913 12:06:34.637649    4762 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:06:34.637662    4762 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:06:34.637672    4762 cache.go:56] Caching tarball of preloaded images
	I0913 12:06:34.637729    4762 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:06:34.637735    4762 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:06:34.637789    4762 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/docker-flags-661000/config.json ...
	I0913 12:06:34.637800    4762 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/docker-flags-661000/config.json: {Name:mk154d95fa709e77af405ae23f1aff0dfe45777a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:06:34.638013    4762 start.go:360] acquireMachinesLock for docker-flags-661000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:34.638047    4762 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "docker-flags-661000"
	I0913 12:06:34.638057    4762 start.go:93] Provisioning new machine with config: &{Name:docker-flags-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:34.638089    4762 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:34.645830    4762 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:34.663602    4762 start.go:159] libmachine.API.Create for "docker-flags-661000" (driver="qemu2")
	I0913 12:06:34.663636    4762 client.go:168] LocalClient.Create starting
	I0913 12:06:34.663707    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:34.663739    4762 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:34.663749    4762 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:34.663792    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:34.663815    4762 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:34.663822    4762 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:34.664201    4762 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:34.821443    4762 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:34.873340    4762 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:34.873345    4762 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:34.873552    4762 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2
	I0913 12:06:34.882659    4762 main.go:141] libmachine: STDOUT: 
	I0913 12:06:34.882680    4762 main.go:141] libmachine: STDERR: 
	I0913 12:06:34.882735    4762 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2 +20000M
	I0913 12:06:34.890484    4762 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:34.890497    4762 main.go:141] libmachine: STDERR: 
	I0913 12:06:34.890515    4762 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2
	I0913 12:06:34.890520    4762 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:34.890532    4762 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:34.890560    4762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:10:cd:06:61:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2
	I0913 12:06:34.892179    4762 main.go:141] libmachine: STDOUT: 
	I0913 12:06:34.892194    4762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:34.892212    4762 client.go:171] duration metric: took 228.578583ms to LocalClient.Create
	I0913 12:06:36.894319    4762 start.go:128] duration metric: took 2.25629775s to createHost
	I0913 12:06:36.894379    4762 start.go:83] releasing machines lock for "docker-flags-661000", held for 2.256413s
	W0913 12:06:36.894429    4762 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:36.911654    4762 out.go:177] * Deleting "docker-flags-661000" in qemu2 ...
	W0913 12:06:36.941704    4762 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:36.941724    4762 start.go:729] Will try again in 5 seconds ...
	I0913 12:06:41.943768    4762 start.go:360] acquireMachinesLock for docker-flags-661000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:41.957320    4762 start.go:364] duration metric: took 13.417333ms to acquireMachinesLock for "docker-flags-661000"
	I0913 12:06:41.957470    4762 start.go:93] Provisioning new machine with config: &{Name:docker-flags-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:41.957738    4762 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:41.972151    4762 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:42.024640    4762 start.go:159] libmachine.API.Create for "docker-flags-661000" (driver="qemu2")
	I0913 12:06:42.024698    4762 client.go:168] LocalClient.Create starting
	I0913 12:06:42.024815    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:42.024875    4762 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:42.024899    4762 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:42.024971    4762 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:42.025015    4762 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:42.025030    4762 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:42.025612    4762 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:42.207421    4762 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:42.322666    4762 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:42.322672    4762 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:42.322872    4762 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2
	I0913 12:06:42.332015    4762 main.go:141] libmachine: STDOUT: 
	I0913 12:06:42.332031    4762 main.go:141] libmachine: STDERR: 
	I0913 12:06:42.332089    4762 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2 +20000M
	I0913 12:06:42.339876    4762 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:42.339893    4762 main.go:141] libmachine: STDERR: 
	I0913 12:06:42.339903    4762 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2
	I0913 12:06:42.339910    4762 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:42.339918    4762 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:42.339949    4762 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:e9:9e:d4:6a:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/docker-flags-661000/disk.qcow2
	I0913 12:06:42.341591    4762 main.go:141] libmachine: STDOUT: 
	I0913 12:06:42.341606    4762 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:42.341619    4762 client.go:171] duration metric: took 316.926917ms to LocalClient.Create
	I0913 12:06:44.343787    4762 start.go:128] duration metric: took 2.386115583s to createHost
	I0913 12:06:44.343851    4762 start.go:83] releasing machines lock for "docker-flags-661000", held for 2.386591916s
	W0913 12:06:44.344265    4762 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:44.364030    4762 out.go:201] 
	W0913 12:06:44.373948    4762 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:06:44.373972    4762 out.go:270] * 
	* 
	W0913 12:06:44.376798    4762 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:06:44.386886    4762 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-661000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
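
The launch lines in the stderr dump above show where the failure actually occurs: qemu is not executed directly but through socket_vmnet_client, which first connects to /var/run/socket_vmnet and hands the connection to qemu as the inherited descriptor in -netdev socket,id=net0,fd=3. A sketch of that wrapper invocation, with both paths taken from the log and the qemu flags elided:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// As read from the log, socket_vmnet_client must reach the daemon's
		// socket before qemu ever starts; with the daemon down it fails
		// immediately, which is the "Connection refused" seen in this run.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			// ...remaining qemu flags exactly as captured above
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
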
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-661000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-661000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.181208ms)

-- stdout --
	* The control-plane node docker-flags-661000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-661000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-661000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-661000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-661000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-661000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-661000\"\n"*.
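
The two expectations above are plain substring matches over the output of systemctl show docker --property=Environment; with the host stopped, the match runs against minikube's advice text instead. A minimal sketch of the check, with the property line shown as a hypothetical healthy-VM value:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Hypothetical healthy output: systemd prints one "Environment=" line
		// listing the values handed to dockerd via --docker-env.
		out := "Environment=FOO=BAR BAZ=BAT"
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s present: %v\n", kv, strings.Contains(out, kv))
		}
	}
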
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-661000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-661000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.52ms)

-- stdout --
	* The control-plane node docker-flags-661000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-661000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-661000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-661000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-661000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-661000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-13 12:06:44.529807 -0700 PDT m=+2812.992790501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-661000 -n docker-flags-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-661000 -n docker-flags-661000: exit status 7 (35.054833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-661000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-661000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-661000
--- FAIL: TestDockerFlags (10.12s)

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-254000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-254000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.876453667s)

-- stdout --
	* [force-systemd-flag-254000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-254000" primary control-plane node in "force-systemd-flag-254000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:06:29.493054    4741 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:06:29.493169    4741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:29.493172    4741 out.go:358] Setting ErrFile to fd 2...
	I0913 12:06:29.493175    4741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:29.493319    4741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:06:29.494392    4741 out.go:352] Setting JSON to false
	I0913 12:06:29.510542    4741 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3952,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:06:29.510610    4741 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:06:29.517375    4741 out.go:177] * [force-systemd-flag-254000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:06:29.525372    4741 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:06:29.525411    4741 notify.go:220] Checking for updates...
	I0913 12:06:29.532336    4741 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:06:29.540326    4741 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:06:29.543205    4741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:06:29.546291    4741 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:06:29.549390    4741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:06:29.551140    4741 config.go:182] Loaded profile config "force-systemd-env-174000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:06:29.551209    4741 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:06:29.551261    4741 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:06:29.555298    4741 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:06:29.562153    4741 start.go:297] selected driver: qemu2
	I0913 12:06:29.562158    4741 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:06:29.562164    4741 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:06:29.564529    4741 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:06:29.567294    4741 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:06:29.570365    4741 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 12:06:29.570379    4741 cni.go:84] Creating CNI manager for ""
	I0913 12:06:29.570407    4741 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:06:29.570415    4741 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:06:29.570453    4741 start.go:340] cluster config:
	{Name:force-systemd-flag-254000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:06:29.574272    4741 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:06:29.582269    4741 out.go:177] * Starting "force-systemd-flag-254000" primary control-plane node in "force-systemd-flag-254000" cluster
	I0913 12:06:29.586326    4741 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:06:29.586340    4741 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:06:29.586351    4741 cache.go:56] Caching tarball of preloaded images
	I0913 12:06:29.586415    4741 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:06:29.586420    4741 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:06:29.586474    4741 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/force-systemd-flag-254000/config.json ...
	I0913 12:06:29.586486    4741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/force-systemd-flag-254000/config.json: {Name:mk2e6b1673f53f5a56bc344921ebe0403e51b49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:06:29.586914    4741 start.go:360] acquireMachinesLock for force-systemd-flag-254000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:29.586952    4741 start.go:364] duration metric: took 29.042µs to acquireMachinesLock for "force-systemd-flag-254000"
	I0913 12:06:29.586963    4741 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:29.586992    4741 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:29.594337    4741 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:29.612747    4741 start.go:159] libmachine.API.Create for "force-systemd-flag-254000" (driver="qemu2")
	I0913 12:06:29.612769    4741 client.go:168] LocalClient.Create starting
	I0913 12:06:29.612830    4741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:29.612860    4741 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:29.612870    4741 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:29.612918    4741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:29.612942    4741 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:29.612951    4741 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:29.613389    4741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:29.771343    4741 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:29.864257    4741 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:29.864263    4741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:29.864482    4741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2
	I0913 12:06:29.873724    4741 main.go:141] libmachine: STDOUT: 
	I0913 12:06:29.873741    4741 main.go:141] libmachine: STDERR: 
	I0913 12:06:29.873798    4741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2 +20000M
	I0913 12:06:29.881653    4741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:29.881678    4741 main.go:141] libmachine: STDERR: 
	I0913 12:06:29.881693    4741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2
	I0913 12:06:29.881711    4741 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:29.881722    4741 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:29.881750    4741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:48:75:85:44:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2
	I0913 12:06:29.883402    4741 main.go:141] libmachine: STDOUT: 
	I0913 12:06:29.883423    4741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:29.883443    4741 client.go:171] duration metric: took 270.678167ms to LocalClient.Create
	I0913 12:06:31.885535    4741 start.go:128] duration metric: took 2.298614083s to createHost
	I0913 12:06:31.885601    4741 start.go:83] releasing machines lock for "force-systemd-flag-254000", held for 2.298731s
	W0913 12:06:31.885694    4741 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:31.913815    4741 out.go:177] * Deleting "force-systemd-flag-254000" in qemu2 ...
	W0913 12:06:31.939260    4741 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:31.939277    4741 start.go:729] Will try again in 5 seconds ...
	I0913 12:06:36.941270    4741 start.go:360] acquireMachinesLock for force-systemd-flag-254000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:36.941708    4741 start.go:364] duration metric: took 197.459µs to acquireMachinesLock for "force-systemd-flag-254000"
	I0913 12:06:36.941770    4741 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-254000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-254000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:36.941930    4741 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:36.959570    4741 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:37.000179    4741 start.go:159] libmachine.API.Create for "force-systemd-flag-254000" (driver="qemu2")
	I0913 12:06:37.000248    4741 client.go:168] LocalClient.Create starting
	I0913 12:06:37.000360    4741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:37.000424    4741 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:37.000440    4741 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:37.000494    4741 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:37.000533    4741 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:37.000552    4741 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:37.001363    4741 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:37.173606    4741 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:37.273672    4741 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:37.273677    4741 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:37.273884    4741 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2
	I0913 12:06:37.283223    4741 main.go:141] libmachine: STDOUT: 
	I0913 12:06:37.283239    4741 main.go:141] libmachine: STDERR: 
	I0913 12:06:37.283307    4741 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2 +20000M
	I0913 12:06:37.291079    4741 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:37.291095    4741 main.go:141] libmachine: STDERR: 
	I0913 12:06:37.291108    4741 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2
	I0913 12:06:37.291114    4741 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:37.291123    4741 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:37.291157    4741 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:42:31:f5:45:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-flag-254000/disk.qcow2
	I0913 12:06:37.292719    4741 main.go:141] libmachine: STDOUT: 
	I0913 12:06:37.292738    4741 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:37.292752    4741 client.go:171] duration metric: took 292.510208ms to LocalClient.Create
	I0913 12:06:39.294859    4741 start.go:128] duration metric: took 2.352999209s to createHost
	I0913 12:06:39.294912    4741 start.go:83] releasing machines lock for "force-systemd-flag-254000", held for 2.353276667s
	W0913 12:06:39.295240    4741 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:39.305752    4741 out.go:201] 
	W0913 12:06:39.313935    4741 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:06:39.313961    4741 out.go:270] * 
	* 
	W0913 12:06:39.316727    4741 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:06:39.327680    4741 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-254000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-254000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-254000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.86775ms)

-- stdout --
	* The control-plane node force-systemd-flag-254000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-254000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-254000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-13 12:06:39.419598 -0700 PDT m=+2807.882378668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-254000 -n force-systemd-flag-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-254000 -n force-systemd-flag-254000: exit status 7 (33.055375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-254000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-254000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-254000
--- FAIL: TestForceSystemdFlag (10.07s)
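Every start attempt above dies at the same step: QEMU is launched through socket_vmnet_client, which must first reach the socket_vmnet daemon on /var/run/socket_vmnet (the SocketVMnetPath in the machine config above), and that connect is refused. A minimal probe sketch, not part of the test suite, that reproduces the same check from Go:

	// probe_socket_vmnet.go - checks whether anything is listening on the
	// socket_vmnet control socket. If the daemon is not running, DialTimeout
	// fails with the same "connect: connection refused" recorded in the
	// STDERR lines above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}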

TestForceSystemdEnv (10.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-174000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-174000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.418578708s)

-- stdout --
	* [force-systemd-env-174000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-174000" primary control-plane node in "force-systemd-env-174000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-174000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:06:23.940632    4709 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:06:23.940751    4709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:23.940754    4709 out.go:358] Setting ErrFile to fd 2...
	I0913 12:06:23.940757    4709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:06:23.940874    4709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:06:23.941980    4709 out.go:352] Setting JSON to false
	I0913 12:06:23.958092    4709 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3946,"bootTime":1726250437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:06:23.958159    4709 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:06:23.963348    4709 out.go:177] * [force-systemd-env-174000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:06:23.974332    4709 notify.go:220] Checking for updates...
	I0913 12:06:23.979353    4709 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:06:23.987260    4709 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:06:23.995317    4709 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:06:24.003339    4709 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:06:24.011293    4709 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:06:24.019147    4709 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0913 12:06:24.023637    4709 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:06:24.023675    4709 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:06:24.027323    4709 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:06:24.034342    4709 start.go:297] selected driver: qemu2
	I0913 12:06:24.034348    4709 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:06:24.034353    4709 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:06:24.036638    4709 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:06:24.040336    4709 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:06:24.044419    4709 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 12:06:24.044433    4709 cni.go:84] Creating CNI manager for ""
	I0913 12:06:24.044455    4709 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:06:24.044459    4709 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:06:24.044485    4709 start.go:340] cluster config:
	{Name:force-systemd-env-174000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:06:24.048187    4709 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:06:24.054335    4709 out.go:177] * Starting "force-systemd-env-174000" primary control-plane node in "force-systemd-env-174000" cluster
	I0913 12:06:24.058313    4709 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:06:24.058326    4709 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:06:24.058337    4709 cache.go:56] Caching tarball of preloaded images
	I0913 12:06:24.058401    4709 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:06:24.058406    4709 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:06:24.058472    4709 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/force-systemd-env-174000/config.json ...
	I0913 12:06:24.058483    4709 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/force-systemd-env-174000/config.json: {Name:mk65cf5380bafd855903a49cbe9f9feed848ef17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:06:24.058792    4709 start.go:360] acquireMachinesLock for force-systemd-env-174000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:24.058824    4709 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "force-systemd-env-174000"
	I0913 12:06:24.058834    4709 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:24.058862    4709 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:24.063217    4709 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:24.079697    4709 start.go:159] libmachine.API.Create for "force-systemd-env-174000" (driver="qemu2")
	I0913 12:06:24.079728    4709 client.go:168] LocalClient.Create starting
	I0913 12:06:24.079780    4709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:24.079810    4709 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:24.079819    4709 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:24.079856    4709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:24.079878    4709 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:24.079890    4709 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:24.080223    4709 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:24.241896    4709 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:24.368306    4709 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:24.368315    4709 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:24.368494    4709 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2
	I0913 12:06:24.377226    4709 main.go:141] libmachine: STDOUT: 
	I0913 12:06:24.377248    4709 main.go:141] libmachine: STDERR: 
	I0913 12:06:24.377307    4709 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2 +20000M
	I0913 12:06:24.385213    4709 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:24.385235    4709 main.go:141] libmachine: STDERR: 
	I0913 12:06:24.385259    4709 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2
	I0913 12:06:24.385264    4709 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:24.385276    4709 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:24.385303    4709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:64:ad:9c:45:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2
	I0913 12:06:24.386787    4709 main.go:141] libmachine: STDOUT: 
	I0913 12:06:24.386801    4709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:24.386822    4709 client.go:171] duration metric: took 307.10075ms to LocalClient.Create
	I0913 12:06:26.388960    4709 start.go:128] duration metric: took 2.330159208s to createHost
	I0913 12:06:26.389056    4709 start.go:83] releasing machines lock for "force-systemd-env-174000", held for 2.330313458s
	W0913 12:06:26.389152    4709 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:26.396209    4709 out.go:177] * Deleting "force-systemd-env-174000" in qemu2 ...
	W0913 12:06:26.427902    4709 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:26.427927    4709 start.go:729] Will try again in 5 seconds ...
	I0913 12:06:31.429895    4709 start.go:360] acquireMachinesLock for force-systemd-env-174000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:06:31.885825    4709 start.go:364] duration metric: took 455.778875ms to acquireMachinesLock for "force-systemd-env-174000"
	I0913 12:06:31.885971    4709 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-174000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-174000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:06:31.886224    4709 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:06:31.902749    4709 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 12:06:31.950562    4709 start.go:159] libmachine.API.Create for "force-systemd-env-174000" (driver="qemu2")
	I0913 12:06:31.950607    4709 client.go:168] LocalClient.Create starting
	I0913 12:06:31.950731    4709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:06:31.950789    4709 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:31.950808    4709 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:31.950874    4709 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:06:31.950918    4709 main.go:141] libmachine: Decoding PEM data...
	I0913 12:06:31.950931    4709 main.go:141] libmachine: Parsing certificate...
	I0913 12:06:31.951462    4709 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:06:32.160420    4709 main.go:141] libmachine: Creating SSH key...
	I0913 12:06:32.260791    4709 main.go:141] libmachine: Creating Disk image...
	I0913 12:06:32.260796    4709 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:06:32.261001    4709 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2
	I0913 12:06:32.270396    4709 main.go:141] libmachine: STDOUT: 
	I0913 12:06:32.270412    4709 main.go:141] libmachine: STDERR: 
	I0913 12:06:32.270462    4709 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2 +20000M
	I0913 12:06:32.278189    4709 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:06:32.278210    4709 main.go:141] libmachine: STDERR: 
	I0913 12:06:32.278226    4709 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2
	I0913 12:06:32.278231    4709 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:06:32.278238    4709 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:06:32.278266    4709 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:2b:55:f0:84:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/force-systemd-env-174000/disk.qcow2
	I0913 12:06:32.279857    4709 main.go:141] libmachine: STDOUT: 
	I0913 12:06:32.279875    4709 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:06:32.279897    4709 client.go:171] duration metric: took 329.298625ms to LocalClient.Create
	I0913 12:06:34.282028    4709 start.go:128] duration metric: took 2.395868041s to createHost
	I0913 12:06:34.282162    4709 start.go:83] releasing machines lock for "force-systemd-env-174000", held for 2.3962805s
	W0913 12:06:34.282505    4709 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-174000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:06:34.298045    4709 out.go:201] 
	W0913 12:06:34.302881    4709 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:06:34.302948    4709 out.go:270] * 
	* 
	W0913 12:06:34.305589    4709 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:06:34.314926    4709 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-174000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-174000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-174000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.167291ms)

-- stdout --
	* The control-plane node force-systemd-env-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-174000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-174000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-13 12:06:34.407576 -0700 PDT m=+2802.870157126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-174000 -n force-systemd-env-174000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-174000 -n force-systemd-env-174000: exit status 7 (32.646291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-174000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-174000
--- FAIL: TestForceSystemdEnv (10.61s)
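Both force-systemd tests fail before the assertion they exist for ever runs. For reference, that check is a comparison of Docker's reported cgroup driver against "systemd"; a simplified sketch, assuming a locally reachable docker CLI (the test itself routes the same command through "minikube ssh"):

	// cgroup_driver_check.go - asks Docker for its cgroup driver, the value
	// docker_test.go would compare against "systemd" had the VM started.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Printf("cgroup driver = %q (want %q)\n", strings.TrimSpace(string(out)), "systemd")
	}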

TestFunctional/parallel/ServiceCmdConnect (34.34s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-033000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-033000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-lh9q8" [5eab0ffa-07f8-4558-ad6a-e17b0a984596] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-lh9q8" [5eab0ffa-07f8-4558-ad6a-e17b0a984596] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0045225s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31292
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
E0913 11:39:17.531888    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31292: Get "http://192.168.105.4:31292": dial tcp 192.168.105.4:31292: connect: connection refused
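The endpoint itself was discovered correctly; it is the backing pod that never serves. A rough sketch of the retry-and-fetch loop the test performs against the reported NodePort URL (the URL below is the one from this run):

	// fetch_nodeport.go - retries an HTTP GET against the NodePort endpoint.
	// "connection refused" on every attempt is consistent with the backing
	// pod crash-looping, as the post-mortem below shows.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://192.168.105.4:31292" // endpoint reported above
		for attempt := 1; attempt <= 7; attempt++ {
			resp, err := http.Get(url)
			if err != nil {
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(2 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("service responded:", resp.Status)
			return
		}
		fmt.Println("service never became reachable")
	}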
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-033000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-lh9q8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-033000/192.168.105.4
Start Time:       Fri, 13 Sep 2024 11:39:07 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://805baec60a687e7a6eca95d7b807494c543b1d932da63e9d208f11a5b8c5925f
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 13 Sep 2024 11:39:21 -0700
      Finished:     Fri, 13 Sep 2024 11:39:21 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c49m9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-c49m9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-lh9q8 to functional-033000
  Normal   Pulled     19s (x3 over 32s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    19s (x3 over 32s)  kubelet            Created container echoserver-arm
  Normal   Started    19s (x3 over 32s)  kubelet            Started container echoserver-arm
  Warning  BackOff    6s (x3 over 30s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-lh9q8_default(5eab0ffa-07f8-4558-ad6a-e17b0a984596)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-033000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
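This one log line is the root cause of the crash loop: "exec format error" means the image's entrypoint binary is built for a different CPU architecture than the arm64 node. A minimal diagnostic sketch, assuming the binary has first been copied out of the image (for example with "docker cp"):

	// check_elf_arch.go - prints the target machine of an ELF binary; on an
	// arm64 node anything other than EM_AARCH64 hits "exec format error".
	package main

	import (
		"debug/elf"
		"fmt"
		"os"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Println("usage: check_elf_arch <binary>")
			return
		}
		f, err := elf.Open(os.Args[1])
		if err != nil {
			fmt.Println("not a readable ELF file:", err)
			return
		}
		defer f.Close()
		fmt.Println("ELF machine:", f.Machine) // e.g. EM_X86_64 vs EM_AARCH64
	}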
functional_test.go:1614: (dbg) Run:  kubectl --context functional-033000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.3.93
IPs:                      10.105.3.93
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31292/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-033000 -n functional-033000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh -- ls                                                                                          | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh cat                                                                                            | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | /mount-9p/test-1726252770019336000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh stat                                                                                           | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh stat                                                                                           | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh sudo                                                                                           | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2906958340/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh -- ls                                                                                          | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh sudo                                                                                           | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount3     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount2     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount1     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-033000 ssh findmnt                                                                                        | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT | 13 Sep 24 11:39 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-033000 --dry-run                                                                                       | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-033000                                                                                                 | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-033000 | jenkins | v1.34.0 | 13 Sep 24 11:39 PDT |                     |
	|           | -p functional-033000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
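	
	The table above records the TestFunctional mount flow: a host temp dir is 9p-mounted into the guest, verified with findmnt/ls/stat/cat, force-unmounted, then re-mounted on a specific port and finally on three simultaneous targets before the --kill cleanup. A minimal sketch of the same check loop, with a placeholder host path standing in for the temp dirs above:
	
	  # mount a host directory into the guest at /mount-9p (host path is illustrative)
	  minikube -p functional-033000 mount /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
	  # verify the 9p filesystem and its contents from inside the guest
	  minikube -p functional-033000 ssh -- "findmnt -T /mount-9p | grep 9p"
	  minikube -p functional-033000 ssh -- ls -la /mount-9p
	  # force-unmount during cleanup
	  minikube -p functional-033000 ssh -- sudo umount -f /mount-9p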
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 11:39:38
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 11:39:38.826522    3017 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:39:38.826622    3017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:39:38.826625    3017 out.go:358] Setting ErrFile to fd 2...
	I0913 11:39:38.826628    3017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:39:38.826761    3017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:39:38.828143    3017 out.go:352] Setting JSON to false
	I0913 11:39:38.845287    3017 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2342,"bootTime":1726250436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:39:38.845390    3017 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:39:38.849153    3017 out.go:177] * [functional-033000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:39:38.856187    3017 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 11:39:38.856284    3017 notify.go:220] Checking for updates...
	I0913 11:39:38.863168    3017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:39:38.866118    3017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:39:38.869125    3017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:39:38.870545    3017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 11:39:38.874119    3017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 11:39:38.877434    3017 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:39:38.877686    3017 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:39:38.881908    3017 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 11:39:38.889160    3017 start.go:297] selected driver: qemu2
	I0913 11:39:38.889169    3017 start.go:901] validating driver "qemu2" against &{Name:functional-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:39:38.889224    3017 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 11:39:38.895005    3017 out.go:201] 
	W0913 11:39:38.899077    3017 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 11:39:38.903107    3017 out.go:201] 
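	
	The start above is rejected during validation: the dry-run requested 250MB against minikube's 1800MB usable minimum, so it exits with RSRC_INSUFFICIENT_REQ_MEMORY before touching the VM. A hedged example of a dry-run that clears the memory check (the value is illustrative; anything at or above the minimum passes):
	
	  # request at least the 1800MB minimum so validation succeeds
	  minikube start -p functional-033000 --dry-run --memory 2048mb --alsologtostderr --driver=qemu2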
	
	
	==> Docker <==
	Sep 13 18:39:31 functional-033000 cri-dockerd[6011]: time="2024-09-13T18:39:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/17a2e9e4e83b3a613b2daa8536fa3e04fde0c27c161c8b464041c729e4e022b1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 13 18:39:33 functional-033000 cri-dockerd[6011]: time="2024-09-13T18:39:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.295417633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.295451221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.295461972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.295504395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 18:39:33 functional-033000 dockerd[5742]: time="2024-09-13T18:39:33.336812941Z" level=info msg="ignoring event" container=062df022f24f2b985e3fd8a6110937c0f8b0df1c2b135063d908108a12b38350 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.336880242Z" level=info msg="shim disconnected" id=062df022f24f2b985e3fd8a6110937c0f8b0df1c2b135063d908108a12b38350 namespace=moby
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.336906871Z" level=warning msg="cleaning up after shim disconnected" id=062df022f24f2b985e3fd8a6110937c0f8b0df1c2b135063d908108a12b38350 namespace=moby
	Sep 13 18:39:33 functional-033000 dockerd[5749]: time="2024-09-13T18:39:33.336910871Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:39:34 functional-033000 dockerd[5742]: time="2024-09-13T18:39:34.707804191Z" level=info msg="ignoring event" container=17a2e9e4e83b3a613b2daa8536fa3e04fde0c27c161c8b464041c729e4e022b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 18:39:34 functional-033000 dockerd[5749]: time="2024-09-13T18:39:34.708000134Z" level=info msg="shim disconnected" id=17a2e9e4e83b3a613b2daa8536fa3e04fde0c27c161c8b464041c729e4e022b1 namespace=moby
	Sep 13 18:39:34 functional-033000 dockerd[5749]: time="2024-09-13T18:39:34.708034013Z" level=warning msg="cleaning up after shim disconnected" id=17a2e9e4e83b3a613b2daa8536fa3e04fde0c27c161c8b464041c729e4e022b1 namespace=moby
	Sep 13 18:39:34 functional-033000 dockerd[5749]: time="2024-09-13T18:39:34.708039055Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.899102659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.899146374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.899160125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.899204631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.948452670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.948591480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.948603481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 18:39:39 functional-033000 dockerd[5749]: time="2024-09-13T18:39:39.948633985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 18:39:39 functional-033000 cri-dockerd[6011]: time="2024-09-13T18:39:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d59e9a3534a48f2a00aad00b8f74885d9125b86e5fbdc89b1d9a9f9cd6607260/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 13 18:39:39 functional-033000 cri-dockerd[6011]: time="2024-09-13T18:39:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b1e8071866cfff39332d8b154008360a0a4dfaf908b83c6f136ae68ebcb1a720/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 13 18:39:40 functional-033000 dockerd[5742]: time="2024-09-13T18:39:40.204015922Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
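	
	The section interleaves dockerd and cri-dockerd journal entries: resolv.conf rewrites for new pod sandboxes, the busybox pull for the mount-munger pod, shim teardown after that pod exits, and the dashboard image pull. A sketch of reading the same journals from the guest (the docker/cri-docker unit names are an assumption about minikube's standard systemd setup):
	
	  # tail both container-runtime journals inside the node
	  minikube -p functional-033000 ssh -- "sudo journalctl -u docker -u cri-docker --no-pager -n 30"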
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	062df022f24f2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   17a2e9e4e83b3       busybox-mount
	62ef87e3ca6c4       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         18 seconds ago       Running             myfrontend                0                   afb486ff1d02b       sp-pod
	805baec60a687       72565bf5bbedf                                                                                         20 seconds ago       Exited              echoserver-arm            2                   114ffeda3765c       hello-node-connect-65d86f57f4-lh9q8
	01a065ac0b124       72565bf5bbedf                                                                                         26 seconds ago       Exited              echoserver-arm            2                   62c12534ee3c0       hello-node-64b4f8f9ff-h4tdp
	752e7364acf25       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         40 seconds ago       Running             nginx                     0                   b765ce8ec313b       nginx-svc
	7e836c0a2dc70       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   ff109fea46df5       coredns-7c65d6cfc9-rrrmn
	dc30eb0d9977e       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   6f551325d7ca1       kube-proxy-7mxtd
	522b169408a6c       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   44ca02977ff76       storage-provisioner
	8eb07ef4f0776       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   5131ab77ef84a       kube-controller-manager-functional-033000
	049e516127791       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   58cbc4fd638d2       kube-scheduler-functional-033000
	c6a79d4b93d77       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   52e2833ea0cd3       etcd-functional-033000
	0649af3994aea       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   d8bcd3b8db518       kube-apiserver-functional-033000
	d8a62c6b643d9       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   7045db21dfb21       storage-provisioner
	7e53989cb73ee       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   3ca44aa2533d1       coredns-7c65d6cfc9-rrrmn
	a12c18e926ecd       24a140c548c07                                                                                         2 minutes ago        Exited              kube-proxy                1                   7eecde180365c       kube-proxy-7mxtd
	cdda90482c59a       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   a2c5b1d81e5bd       kube-scheduler-functional-033000
	83eec4196c2f5       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   e2e132718d21c       etcd-functional-033000
	0cb36d4ec2056       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   09429b3047bb9       kube-controller-manager-functional-033000
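	
	The listing shows the expected post-restart shape: current control-plane containers Running at ATTEMPT 2 (3 for storage-provisioner), their pre-restart instances Exited, the test workloads (nginx-svc, sp-pod, the two echoserver pods) created during the run, and the one-shot mount-munger container already Exited. The same CRI view can be pulled from the node; a sketch:
	
	  # list all CRI containers, including exited ones, from inside the guest
	  minikube -p functional-033000 ssh -- sudo crictl ps -a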
	
	
	==> coredns [7e53989cb73e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52615 - 57324 "HINFO IN 2327473600925176682.4686278307819312529. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119547653s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7e836c0a2dc7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44295 - 28347 "HINFO IN 5635141909416308314.5296774370884715572. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023395613s
	[INFO] 10.244.0.1:25668 - 37170 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000098389s
	[INFO] 10.244.0.1:24959 - 41689 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096681s
	[INFO] 10.244.0.1:31763 - 65378 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00003538s
	[INFO] 10.244.0.1:22172 - 27524 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001271059s
	[INFO] 10.244.0.1:55133 - 56063 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000059258s
	[INFO] 10.244.0.1:4996 - 5262 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000117475s
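	
	The query log confirms in-cluster resolution of nginx-svc: A and AAAA lookups for nginx-svc.default.svc.cluster.local answer NOERROR in well under a millisecond. Equivalent queries can be generated from a throwaway pod; a sketch (pod name and image are illustrative):
	
	  # resolve the service from inside the cluster; CoreDNS logs the resulting A/AAAA queries
	  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup nginx-svc.default.svc.cluster.local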
	
	
	==> describe nodes <==
	Name:               functional-033000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-033000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=functional-033000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T11_37_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 18:37:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-033000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 18:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 18:39:29 +0000   Fri, 13 Sep 2024 18:37:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 18:39:29 +0000   Fri, 13 Sep 2024 18:37:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 18:39:29 +0000   Fri, 13 Sep 2024 18:37:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 18:39:29 +0000   Fri, 13 Sep 2024 18:37:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-033000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 24d1077b76ee4c6795e02d0026edfb03
	  System UUID:                24d1077b76ee4c6795e02d0026edfb03
	  Boot ID:                    1c074fa3-185b-4f48-8902-8aa874e0a6ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-h4tdp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     hello-node-connect-65d86f57f4-lh9q8          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 coredns-7c65d6cfc9-rrrmn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m27s
	  kube-system                 etcd-functional-033000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m33s
	  kube-system                 kube-apiserver-functional-033000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-033000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-7mxtd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-functional-033000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-4qksn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-hxzdk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m26s                kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 119s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m32s                kubelet          Node functional-033000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m32s                kubelet          Node functional-033000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s                kubelet          Node functional-033000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m32s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m28s                kubelet          Node functional-033000 status is now: NodeReady
	  Normal  RegisteredNode           2m27s                node-controller  Node functional-033000 event: Registered Node functional-033000 in Controller
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node functional-033000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node functional-033000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node functional-033000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           117s                 node-controller  Node functional-033000 event: Registered Node functional-033000 in Controller
	  Normal  Starting                 76s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)    kubelet          Node functional-033000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)    kubelet          Node functional-033000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)    kubelet          Node functional-033000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                  node-controller  Node functional-033000 event: Registered Node functional-033000 in Controller
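	
	The three Starting/NodeHasSufficient*/RegisteredNode clusters in the event list (2m32s, 2m2s, and 76s ago) correspond to the initial boot and the two kubelet restarts exercised by the functional tests. The block is standard node description output and can be regenerated at any point; a sketch:
	
	  # reproduce the node report above
	  kubectl describe node functional-033000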
	
	
	==> dmesg <==
	[  +2.410517] kauditd_printk_skb: 199 callbacks suppressed
	[  +5.305582] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.300034] systemd-fstab-generator[4819]: Ignoring "noauto" option for root device
	[Sep13 18:38] systemd-fstab-generator[5256]: Ignoring "noauto" option for root device
	[  +0.051393] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.114346] systemd-fstab-generator[5289]: Ignoring "noauto" option for root device
	[  +0.124103] systemd-fstab-generator[5301]: Ignoring "noauto" option for root device
	[  +0.111809] systemd-fstab-generator[5315]: Ignoring "noauto" option for root device
	[  +5.140862] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.347633] systemd-fstab-generator[5964]: Ignoring "noauto" option for root device
	[  +0.085217] systemd-fstab-generator[5976]: Ignoring "noauto" option for root device
	[  +0.084680] systemd-fstab-generator[5988]: Ignoring "noauto" option for root device
	[  +0.096430] systemd-fstab-generator[6003]: Ignoring "noauto" option for root device
	[  +0.232215] systemd-fstab-generator[6168]: Ignoring "noauto" option for root device
	[  +0.965159] systemd-fstab-generator[6291]: Ignoring "noauto" option for root device
	[  +3.414046] kauditd_printk_skb: 199 callbacks suppressed
	[ +14.638461] systemd-fstab-generator[7318]: Ignoring "noauto" option for root device
	[  +0.058395] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.374991] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.547539] kauditd_printk_skb: 19 callbacks suppressed
	[Sep13 18:39] kauditd_printk_skb: 27 callbacks suppressed
	[ +12.832755] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.109369] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.379306] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.740797] kauditd_printk_skb: 15 callbacks suppressed
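	
	The dmesg excerpt is dominated by systemd-fstab-generator notices and audit-message throttling from the repeated service restarts; nothing here points to a kernel-level fault. The buffer can be re-read from the guest; a sketch:
	
	  # show the tail of the kernel ring buffer inside the node
	  minikube -p functional-033000 ssh -- "sudo dmesg | tail -n 30"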
	
	
	==> etcd [83eec4196c2f] <==
	{"level":"info","ts":"2024-09-13T18:37:40.292570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-13T18:37:40.292594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T18:37:40.292611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-13T18:37:40.292644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-13T18:37:40.292664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-13T18:37:40.304463Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-033000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T18:37:40.304498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:37:40.304693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:37:40.305102Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:37:40.305559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-13T18:37:40.306072Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:37:40.316188Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T18:37:40.336386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T18:37:40.336451Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T18:38:11.593961Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T18:38:11.593994Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-033000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-13T18:38:11.594024Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T18:38:11.594036Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T18:38:11.594059Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T18:38:11.594097Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/09/13 18:38:11 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-09-13T18:38:11.608474Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-13T18:38:11.615427Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T18:38:11.615486Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T18:38:11.615491Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-033000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [c6a79d4b93d7] <==
	{"level":"info","ts":"2024-09-13T18:38:26.433667Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-13T18:38:26.436354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:38:26.436383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T18:38:26.437463Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:38:26.440930Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T18:38:26.444368Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T18:38:26.444847Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T18:38:26.445261Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T18:38:26.445397Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T18:38:27.781629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-13T18:38:27.781775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-13T18:38:27.781840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-13T18:38:27.781946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-13T18:38:27.782086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-13T18:38:27.782204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-13T18:38:27.782334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-13T18:38:27.787327Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-033000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T18:38:27.787470Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:38:27.788028Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T18:38:27.788084Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T18:38:27.788252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T18:38:27.789676Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:38:27.789680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T18:38:27.791747Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-13T18:38:27.792959Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:39:41 up 2 min,  0 users,  load average: 0.70, 0.50, 0.20
	Linux functional-033000 5.10.207 #1 SMP PREEMPT Thu Sep 12 17:20:51 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0649af3994ae] <==
	I0913 18:38:28.398443       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 18:38:28.398471       1 aggregator.go:171] initial CRD sync complete...
	I0913 18:38:28.398484       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 18:38:28.398492       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 18:38:28.398499       1 cache.go:39] Caches are synced for autoregister controller
	I0913 18:38:28.418836       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0913 18:38:28.422002       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0913 18:38:28.422014       1 policy_source.go:224] refreshing policies
	I0913 18:38:28.439828       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 18:38:29.296887       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 18:38:29.817706       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 18:38:29.822080       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 18:38:29.834403       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 18:38:29.841226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 18:38:29.844947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 18:38:31.671335       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 18:38:31.920740       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0913 18:38:47.916617       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.90.124"}
	I0913 18:38:53.471308       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0913 18:38:53.514432       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.136.99"}
	I0913 18:38:57.496522       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.109.215"}
	I0913 18:39:07.936486       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.3.93"}
	I0913 18:39:39.525156       1 controller.go:615] quota admission added evaluator for: namespaces
	I0913 18:39:39.619118       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.75.176"}
	I0913 18:39:39.626378       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.40.104"}
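	
	The apiserver log tracks the test's service creation order: invalid-svc, hello-node, nginx-svc, hello-node-connect, then the two dashboard services, each receiving a ClusterIP from the 10.96.0.0/12 service CIDR. The allocations can be cross-checked against the live service list; a sketch:
	
	  # list services in all namespaces with their allocated ClusterIPs
	  kubectl get svc -A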
	
	
	==> kube-controller-manager [0cb36d4ec205] <==
	I0913 18:37:44.304873       1 shared_informer.go:320] Caches are synced for HPA
	I0913 18:37:44.304888       1 shared_informer.go:320] Caches are synced for disruption
	I0913 18:37:44.305671       1 shared_informer.go:320] Caches are synced for endpoint
	I0913 18:37:44.305715       1 shared_informer.go:320] Caches are synced for TTL
	I0913 18:37:44.305676       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0913 18:37:44.305684       1 shared_informer.go:320] Caches are synced for service account
	I0913 18:37:44.305747       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0913 18:37:44.307861       1 shared_informer.go:320] Caches are synced for node
	I0913 18:37:44.307975       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0913 18:37:44.308003       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0913 18:37:44.308018       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0913 18:37:44.308051       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0913 18:37:44.308170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-033000"
	I0913 18:37:44.309738       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 18:37:44.495775       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 18:37:44.505077       1 shared_informer.go:320] Caches are synced for PV protection
	I0913 18:37:44.505792       1 shared_informer.go:320] Caches are synced for persistent volume
	I0913 18:37:44.505999       1 shared_informer.go:320] Caches are synced for cronjob
	I0913 18:37:44.507498       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 18:37:44.522999       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 18:37:44.936709       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 18:37:45.008186       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 18:37:45.008270       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0913 18:37:46.911646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.793686ms"
	I0913 18:37:46.912986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.979µs"
	
	
	==> kube-controller-manager [8eb07ef4f077] <==
	I0913 18:39:07.917996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="19.211µs"
	I0913 18:39:09.210632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="38.922µs"
	I0913 18:39:10.215394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.628µs"
	I0913 18:39:16.303904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="29.087µs"
	I0913 18:39:21.584987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.628µs"
	I0913 18:39:22.436198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="31.129µs"
	I0913 18:39:28.596794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="47.84µs"
	I0913 18:39:29.438374       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-033000"
	I0913 18:39:34.618035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="77.51µs"
	I0913 18:39:39.556173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.480473ms"
	E0913 18:39:39.556606       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 18:39:39.559176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.292642ms"
	E0913 18:39:39.559194       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 18:39:39.560357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.719358ms"
	E0913 18:39:39.560370       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 18:39:39.562543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.231377ms"
	E0913 18:39:39.562618       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 18:39:39.565233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.411818ms"
	E0913 18:39:39.565249       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 18:39:39.578253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="12.973503ms"
	I0913 18:39:39.589083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.805342ms"
	I0913 18:39:39.589577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.835µs"
	I0913 18:39:39.608829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.456164ms"
	I0913 18:39:39.613963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.110424ms"
	I0913 18:39:39.614046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="39.005µs"
	
	
	==> kube-proxy [a12c18e926ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:37:41.808828       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:37:41.815303       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0913 18:37:41.815333       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:37:41.830428       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:37:41.830446       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:37:41.830459       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:37:41.831172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:37:41.831267       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:37:41.831272       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:37:41.831928       1 config.go:199] "Starting service config controller"
	I0913 18:37:41.832124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:37:41.832028       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:37:41.832138       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:37:41.832211       1 config.go:328] "Starting node config controller"
	I0913 18:37:41.832218       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:37:41.933188       1 shared_informer.go:320] Caches are synced for node config
	I0913 18:37:41.933323       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:37:41.933374       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [dc30eb0d9977] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 18:38:29.113403       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 18:38:29.120916       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0913 18:38:29.120950       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 18:38:29.128535       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 18:38:29.128551       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 18:38:29.128562       1 server_linux.go:169] "Using iptables Proxier"
	I0913 18:38:29.129148       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 18:38:29.129245       1 server.go:483] "Version info" version="v1.31.1"
	I0913 18:38:29.129254       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:38:29.130220       1 config.go:199] "Starting service config controller"
	I0913 18:38:29.130232       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 18:38:29.130253       1 config.go:105] "Starting endpoint slice config controller"
	I0913 18:38:29.130256       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 18:38:29.130514       1 config.go:328] "Starting node config controller"
	I0913 18:38:29.130523       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 18:38:29.230286       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 18:38:29.230294       1 shared_informer.go:320] Caches are synced for service config
	I0913 18:38:29.230720       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [049e51612779] <==
	I0913 18:38:26.824059       1 serving.go:386] Generated self-signed cert in-memory
	W0913 18:38:28.317377       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 18:38:28.317507       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 18:38:28.317534       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 18:38:28.317552       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 18:38:28.341819       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 18:38:28.341834       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:38:28.350498       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 18:38:28.351068       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 18:38:28.351084       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 18:38:28.351158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 18:38:28.452015       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cdda90482c59] <==
	I0913 18:37:40.410213       1 serving.go:386] Generated self-signed cert in-memory
	W0913 18:37:40.946245       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 18:37:40.946261       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 18:37:40.946265       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 18:37:40.946269       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 18:37:40.989608       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 18:37:40.989623       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 18:37:40.990873       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 18:37:40.990919       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 18:37:40.990948       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 18:37:40.990991       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 18:37:41.092083       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 18:38:11.593792       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0913 18:38:11.593853       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0913 18:38:11.593929       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 13 18:39:25 functional-033000 kubelet[6298]: E0913 18:39:25.588126    6298 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 13 18:39:25 functional-033000 kubelet[6298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 13 18:39:25 functional-033000 kubelet[6298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 18:39:25 functional-033000 kubelet[6298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 18:39:25 functional-033000 kubelet[6298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 18:39:25 functional-033000 kubelet[6298]: I0913 18:39:25.655282    6298 scope.go:117] "RemoveContainer" containerID="34a6b34ded6d21591b08ffcb1ac2bd5ae631856e58005796baaeb0b1b56fdf47"
	Sep 13 18:39:28 functional-033000 kubelet[6298]: I0913 18:39:28.580238    6298 scope.go:117] "RemoveContainer" containerID="01a065ac0b124342d624c4f5a4bb2543ee7a4c5c66078e67bd667ebc21c3304e"
	Sep 13 18:39:28 functional-033000 kubelet[6298]: E0913 18:39:28.580621    6298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-h4tdp_default(5c18d1ac-804d-47f3-b5f9-b484b232ed75)\"" pod="default/hello-node-64b4f8f9ff-h4tdp" podUID="5c18d1ac-804d-47f3-b5f9-b484b232ed75"
	Sep 13 18:39:31 functional-033000 kubelet[6298]: I0913 18:39:31.619905    6298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1a208572-4875-4b1d-a54f-0f065a65fb08-test-volume\") pod \"busybox-mount\" (UID: \"1a208572-4875-4b1d-a54f-0f065a65fb08\") " pod="default/busybox-mount"
	Sep 13 18:39:31 functional-033000 kubelet[6298]: I0913 18:39:31.620104    6298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99d29\" (UniqueName: \"kubernetes.io/projected/1a208572-4875-4b1d-a54f-0f065a65fb08-kube-api-access-99d29\") pod \"busybox-mount\" (UID: \"1a208572-4875-4b1d-a54f-0f065a65fb08\") " pod="default/busybox-mount"
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.582739    6298 scope.go:117] "RemoveContainer" containerID="805baec60a687e7a6eca95d7b807494c543b1d932da63e9d208f11a5b8c5925f"
	Sep 13 18:39:34 functional-033000 kubelet[6298]: E0913 18:39:34.583170    6298 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-lh9q8_default(5eab0ffa-07f8-4558-ad6a-e17b0a984596)\"" pod="default/hello-node-connect-65d86f57f4-lh9q8" podUID="5eab0ffa-07f8-4558-ad6a-e17b0a984596"
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.740628    6298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1a208572-4875-4b1d-a54f-0f065a65fb08-test-volume\") pod \"1a208572-4875-4b1d-a54f-0f065a65fb08\" (UID: \"1a208572-4875-4b1d-a54f-0f065a65fb08\") "
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.740655    6298 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99d29\" (UniqueName: \"kubernetes.io/projected/1a208572-4875-4b1d-a54f-0f065a65fb08-kube-api-access-99d29\") pod \"1a208572-4875-4b1d-a54f-0f065a65fb08\" (UID: \"1a208572-4875-4b1d-a54f-0f065a65fb08\") "
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.740738    6298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a208572-4875-4b1d-a54f-0f065a65fb08-test-volume" (OuterVolumeSpecName: "test-volume") pod "1a208572-4875-4b1d-a54f-0f065a65fb08" (UID: "1a208572-4875-4b1d-a54f-0f065a65fb08"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.743414    6298 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a208572-4875-4b1d-a54f-0f065a65fb08-kube-api-access-99d29" (OuterVolumeSpecName: "kube-api-access-99d29") pod "1a208572-4875-4b1d-a54f-0f065a65fb08" (UID: "1a208572-4875-4b1d-a54f-0f065a65fb08"). InnerVolumeSpecName "kube-api-access-99d29". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.841112    6298 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/1a208572-4875-4b1d-a54f-0f065a65fb08-test-volume\") on node \"functional-033000\" DevicePath \"\""
	Sep 13 18:39:34 functional-033000 kubelet[6298]: I0913 18:39:34.841127    6298 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-99d29\" (UniqueName: \"kubernetes.io/projected/1a208572-4875-4b1d-a54f-0f065a65fb08-kube-api-access-99d29\") on node \"functional-033000\" DevicePath \"\""
	Sep 13 18:39:35 functional-033000 kubelet[6298]: I0913 18:39:35.628776    6298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17a2e9e4e83b3a613b2daa8536fa3e04fde0c27c161c8b464041c729e4e022b1"
	Sep 13 18:39:39 functional-033000 kubelet[6298]: E0913 18:39:39.573789    6298 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a208572-4875-4b1d-a54f-0f065a65fb08" containerName="mount-munger"
	Sep 13 18:39:39 functional-033000 kubelet[6298]: I0913 18:39:39.573832    6298 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a208572-4875-4b1d-a54f-0f065a65fb08" containerName="mount-munger"
	Sep 13 18:39:39 functional-033000 kubelet[6298]: I0913 18:39:39.683419    6298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7s7\" (UniqueName: \"kubernetes.io/projected/d32fc189-b227-4b1d-834e-923d7a87bf33-kube-api-access-db7s7\") pod \"kubernetes-dashboard-695b96c756-hxzdk\" (UID: \"d32fc189-b227-4b1d-834e-923d7a87bf33\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-hxzdk"
	Sep 13 18:39:39 functional-033000 kubelet[6298]: I0913 18:39:39.683455    6298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d32fc189-b227-4b1d-834e-923d7a87bf33-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-hxzdk\" (UID: \"d32fc189-b227-4b1d-834e-923d7a87bf33\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-hxzdk"
	Sep 13 18:39:39 functional-033000 kubelet[6298]: I0913 18:39:39.784366    6298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzh7f\" (UniqueName: \"kubernetes.io/projected/fa7c681c-df80-44ca-8761-eb7b0048820a-kube-api-access-wzh7f\") pod \"dashboard-metrics-scraper-c5db448b4-4qksn\" (UID: \"fa7c681c-df80-44ca-8761-eb7b0048820a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-4qksn"
	Sep 13 18:39:39 functional-033000 kubelet[6298]: I0913 18:39:39.784432    6298 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa7c681c-df80-44ca-8761-eb7b0048820a-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-4qksn\" (UID: \"fa7c681c-df80-44ca-8761-eb7b0048820a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-4qksn"
	
	
	==> storage-provisioner [522b169408a6] <==
	I0913 18:38:29.058494       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:38:29.067271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:38:29.067295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:38:46.468564       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:38:46.468731       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-033000_95b11a15-a7da-491f-9c13-5c06d05a6921!
	I0913 18:38:46.469135       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"367fe0ed-01cc-4dce-aa3b-0956a4b39bb0", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-033000_95b11a15-a7da-491f-9c13-5c06d05a6921 became leader
	I0913 18:38:46.569324       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-033000_95b11a15-a7da-491f-9c13-5c06d05a6921!
	I0913 18:39:10.194321       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0913 18:39:10.194362       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e05c30d4-5162-4996-9b0d-aa1c8a089ea9 317 0 2024-09-13 18:37:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-13 18:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-790d6ed3-3032-4b9f-ab76-52bbf07a2879 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  790d6ed3-3032-4b9f-ab76-52bbf07a2879 725 0 2024-09-13 18:39:10 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-13 18:39:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-13 18:39:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0913 18:39:10.194852       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"790d6ed3-3032-4b9f-ab76-52bbf07a2879", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0913 18:39:10.194990       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-790d6ed3-3032-4b9f-ab76-52bbf07a2879" provisioned
	I0913 18:39:10.195001       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0913 18:39:10.195004       1 volume_store.go:212] Trying to save persistentvolume "pvc-790d6ed3-3032-4b9f-ab76-52bbf07a2879"
	I0913 18:39:10.206102       1 volume_store.go:219] persistentvolume "pvc-790d6ed3-3032-4b9f-ab76-52bbf07a2879" saved
	I0913 18:39:10.209897       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"790d6ed3-3032-4b9f-ab76-52bbf07a2879", APIVersion:"v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-790d6ed3-3032-4b9f-ab76-52bbf07a2879
	
	
	==> storage-provisioner [d8a62c6b643d] <==
	I0913 18:37:54.376292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 18:37:54.380043       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 18:37:54.380063       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 18:37:54.382887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 18:37:54.382975       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-033000_05a2010b-11c3-49f6-b7c3-1842a242f977!
	I0913 18:37:54.383022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"367fe0ed-01cc-4dce-aa3b-0956a4b39bb0", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-033000_05a2010b-11c3-49f6-b7c3-1842a242f977 became leader
	I0913 18:37:54.483710       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-033000_05a2010b-11c3-49f6-b7c3-1842a242f977!
	

-- /stdout --
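One note on reading the kube-controller-manager log above: the burst of "serviceaccount \"kubernetes-dashboard\" not found" errors at 18:39:39 looks like startup ordering rather than the failure itself; the "Finished syncing" lines immediately after show both dashboard ReplicaSets reconciling once the ServiceAccount exists. If in doubt, that can be confirmed directly (a sketch, assuming the functional-033000 context is still live):

	kubectl --context functional-033000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard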
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-033000 -n functional-033000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-033000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-4qksn kubernetes-dashboard-695b96c756-hxzdk
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-033000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-4qksn kubernetes-dashboard-695b96c756-hxzdk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-033000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-4qksn kubernetes-dashboard-695b96c756-hxzdk: exit status 1 (41.623417ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-033000/192.168.105.4
	Start Time:       Fri, 13 Sep 2024 11:39:31 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://062df022f24f2b985e3fd8a6110937c0f8b0df1c2b135063d908108a12b38350
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 13 Sep 2024 11:39:33 -0700
	      Finished:     Fri, 13 Sep 2024 11:39:33 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-99d29 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-99d29:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-033000
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.343s (1.343s including waiting). Image size: 3547125 bytes.
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-4qksn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-hxzdk" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-033000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-4qksn kubernetes-dashboard-695b96c756-hxzdk: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.34s)
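The likely proximate cause is visible in the kubelet log above: the echoserver-arm container behind both hello-node deployments keeps restarting with a 20s CrashLoopBackOff, so the service never has a ready endpoint for the connect check. A minimal way to inspect this by hand (a sketch; it assumes the functional-033000 context still exists and that the deployments carry the app=<name> label that kubectl create deployment applies by default):

	kubectl --context functional-033000 get pods -l app=hello-node-connect
	kubectl --context functional-033000 describe pod -l app=hello-node-connect
	kubectl --context functional-033000 logs -l app=hello-node-connect --previous

The exit code and events from describe usually distinguish a crashing binary from an image that is wrong for the arm64 host.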

TestMultiControlPlane/serial/StopSecondaryNode (214.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 node stop m02 -v=7 --alsologtostderr
E0913 11:43:53.307791    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.314373    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.325941    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.349299    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.392646    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.476010    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.637469    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:53.960846    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:54.604275    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:55.887686    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:57.002568    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:43:58.449348    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-988000 node stop m02 -v=7 --alsologtostderr: (12.19657175s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr
E0913 11:44:03.572977    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:44:13.816046    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:44:24.729878    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:44:34.298452    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:45:15.259689    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:46:37.178781    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr: exit status 7 (2m55.9718835s)

-- stdout --
	ha-988000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-988000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-988000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0913 11:44:01.394480    3342 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:44:01.394659    3342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:44:01.394662    3342 out.go:358] Setting ErrFile to fd 2...
	I0913 11:44:01.394665    3342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:44:01.394796    3342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:44:01.394921    3342 out.go:352] Setting JSON to false
	I0913 11:44:01.394940    3342 mustload.go:65] Loading cluster: ha-988000
	I0913 11:44:01.394976    3342 notify.go:220] Checking for updates...
	I0913 11:44:01.395170    3342 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:44:01.395178    3342 status.go:255] checking status of ha-988000 ...
	I0913 11:44:01.395938    3342 status.go:330] ha-988000 host status = "Running" (err=<nil>)
	I0913 11:44:01.395954    3342 host.go:66] Checking if "ha-988000" exists ...
	I0913 11:44:01.396080    3342 host.go:66] Checking if "ha-988000" exists ...
	I0913 11:44:01.396197    3342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 11:44:01.396205    3342 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/id_rsa Username:docker}
	W0913 11:44:27.315797    3342 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0913 11:44:27.315870    3342 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 11:44:27.315881    3342 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 11:44:27.315886    3342 status.go:257] ha-988000 status: &{Name:ha-988000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 11:44:27.315899    3342 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 11:44:27.315903    3342 status.go:255] checking status of ha-988000-m02 ...
	I0913 11:44:27.316113    3342 status.go:330] ha-988000-m02 host status = "Stopped" (err=<nil>)
	I0913 11:44:27.316120    3342 status.go:343] host is not running, skipping remaining checks
	I0913 11:44:27.316122    3342 status.go:257] ha-988000-m02 status: &{Name:ha-988000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 11:44:27.316126    3342 status.go:255] checking status of ha-988000-m03 ...
	I0913 11:44:27.317182    3342 status.go:330] ha-988000-m03 host status = "Running" (err=<nil>)
	I0913 11:44:27.317197    3342 host.go:66] Checking if "ha-988000-m03" exists ...
	I0913 11:44:27.317459    3342 host.go:66] Checking if "ha-988000-m03" exists ...
	I0913 11:44:27.317647    3342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 11:44:27.317657    3342 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m03/id_rsa Username:docker}
	W0913 11:45:42.315973    3342 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0913 11:45:42.316016    3342 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0913 11:45:42.316024    3342 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 11:45:42.316040    3342 status.go:257] ha-988000-m03 status: &{Name:ha-988000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 11:45:42.316047    3342 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 11:45:42.316052    3342 status.go:255] checking status of ha-988000-m04 ...
	I0913 11:45:42.316867    3342 status.go:330] ha-988000-m04 host status = "Running" (err=<nil>)
	I0913 11:45:42.316876    3342 host.go:66] Checking if "ha-988000-m04" exists ...
	I0913 11:45:42.316987    3342 host.go:66] Checking if "ha-988000-m04" exists ...
	I0913 11:45:42.317123    3342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 11:45:42.317134    3342 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m04/id_rsa Username:docker}
	W0913 11:46:57.315749    3342 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0913 11:46:57.315795    3342 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0913 11:46:57.315803    3342 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0913 11:46:57.315807    3342 status.go:257] ha-988000-m04 status: &{Name:ha-988000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0913 11:46:57.315816    3342 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
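For context on the assertions below: each per-node check in minikube status is an SSH probe; status.go dials port 22 on the guest and runs df -h /var, so a TCP dial timeout is reported as host: Error with kubelet: Nonexistent even if the qemu process is still alive. The probe can be replayed by hand with the key path and user shown in the stderr above (a sketch; IPs and paths are specific to this run):

	ssh -o ConnectTimeout=10 -i /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/id_rsa docker@192.168.105.5 'df -h /var'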
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr": ha-988000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-988000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-988000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr": ha-988000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-988000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-988000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr": ha-988000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-988000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-988000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 3 (25.965467917s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0913 11:47:23.344725    3394 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 11:47:23.344736    3394 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.13s)
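Only ha-988000-m02 was stopped deliberately; the primary, m03, and m04 all report host: Error because their SSH endpoints (192.168.105.5, .7, .8) time out, which points at the host-side socket_vmnet network rather than the guests. A quick triage from the macOS host (a sketch; the socket path comes from the qemu command lines elsewhere in this report, and the launchd label depends on how socket_vmnet was installed):

	sudo launchctl list | grep -i socket_vmnet
	ls -l /var/run/socket_vmnet
	nc -z -G 5 192.168.105.5 22 && echo reachable || echo unreachable

nc -G sets the connect timeout on macOS netcat, so the reachability check fails fast instead of hanging the way the SSH dials above did.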

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.632073791s)
ha_test.go:413: expected profile "ha-988000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-988000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-988000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-988000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
E0913 11:48:53.357813    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:48:57.053700    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 3 (25.95712075s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0913 11:49:06.930150    3424 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 11:49:06.930157    3424 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.59s)
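The assertion wanted "Degraded" (some but not all nodes healthy) yet the profile reported "Stopped", presumably because the status probes could not reach any host over SSH. The relevant fields are easier to pull out of the JSON blob above than to read inline; a sketch, assuming jq is available on the host:

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | [.Name, .Status] | @tsv'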

TestMultiControlPlane/serial/RestartSecondaryNode (208.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-988000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.083231166s)

-- stdout --
	* Starting "ha-988000-m02" control-plane node in "ha-988000" cluster
	* Restarting existing qemu2 VM for "ha-988000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-988000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 11:49:06.963635    3436 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:49:06.963891    3436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:49:06.963896    3436 out.go:358] Setting ErrFile to fd 2...
	I0913 11:49:06.963898    3436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:49:06.964029    3436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:49:06.964297    3436 mustload.go:65] Loading cluster: ha-988000
	I0913 11:49:06.964541    3436 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0913 11:49:06.964778    3436 host.go:58] "ha-988000-m02" host status: Stopped
	I0913 11:49:06.969208    3436 out.go:177] * Starting "ha-988000-m02" control-plane node in "ha-988000" cluster
	I0913 11:49:06.972365    3436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 11:49:06.972376    3436 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 11:49:06.972386    3436 cache.go:56] Caching tarball of preloaded images
	I0913 11:49:06.972456    3436 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 11:49:06.972461    3436 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 11:49:06.972518    3436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/ha-988000/config.json ...
	I0913 11:49:06.973301    3436 start.go:360] acquireMachinesLock for ha-988000-m02: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 11:49:06.973363    3436 start.go:364] duration metric: took 30.084µs to acquireMachinesLock for "ha-988000-m02"
	I0913 11:49:06.973372    3436 start.go:96] Skipping create...Using existing machine configuration
	I0913 11:49:06.973376    3436 fix.go:54] fixHost starting: m02
	I0913 11:49:06.973482    3436 fix.go:112] recreateIfNeeded on ha-988000-m02: state=Stopped err=<nil>
	W0913 11:49:06.973488    3436 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 11:49:06.977288    3436 out.go:177] * Restarting existing qemu2 VM for "ha-988000-m02" ...
	I0913 11:49:06.981301    3436 qemu.go:418] Using hvf for hardware acceleration
	I0913 11:49:06.981339    3436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a2:e7:d9:16:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/disk.qcow2
	I0913 11:49:06.983968    3436 main.go:141] libmachine: STDOUT: 
	I0913 11:49:06.983985    3436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 11:49:06.984010    3436 fix.go:56] duration metric: took 10.632208ms for fixHost
	I0913 11:49:06.984017    3436 start.go:83] releasing machines lock for "ha-988000-m02", held for 10.646792ms
	W0913 11:49:06.984022    3436 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 11:49:06.984055    3436 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 11:49:06.984059    3436 start.go:729] Will try again in 5 seconds ...
	I0913 11:49:11.985920    3436 start.go:360] acquireMachinesLock for ha-988000-m02: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 11:49:11.986034    3436 start.go:364] duration metric: took 96.833µs to acquireMachinesLock for "ha-988000-m02"
	I0913 11:49:11.986068    3436 start.go:96] Skipping create...Using existing machine configuration
	I0913 11:49:11.986072    3436 fix.go:54] fixHost starting: m02
	I0913 11:49:11.986230    3436 fix.go:112] recreateIfNeeded on ha-988000-m02: state=Stopped err=<nil>
	W0913 11:49:11.986237    3436 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 11:49:11.990638    3436 out.go:177] * Restarting existing qemu2 VM for "ha-988000-m02" ...
	I0913 11:49:11.994699    3436 qemu.go:418] Using hvf for hardware acceleration
	I0913 11:49:11.994790    3436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a2:e7:d9:16:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/disk.qcow2
	I0913 11:49:11.996916    3436 main.go:141] libmachine: STDOUT: 
	I0913 11:49:11.996934    3436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 11:49:11.996955    3436 fix.go:56] duration metric: took 10.883208ms for fixHost
	I0913 11:49:11.996958    3436 start.go:83] releasing machines lock for "ha-988000-m02", held for 10.915625ms
	W0913 11:49:11.997003    3436 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 11:49:11.999676    3436 out.go:201] 
	W0913 11:49:12.003655    3436 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 11:49:12.003661    3436 out.go:270] * 
	* 
	W0913 11:49:12.005466    3436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 11:49:12.009722    3436 out.go:201] 

** /stderr **
ha_test.go:422: I0913 11:49:06.963635    3436 out.go:345] Setting OutFile to fd 1 ...
I0913 11:49:06.963891    3436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:49:06.963896    3436 out.go:358] Setting ErrFile to fd 2...
I0913 11:49:06.963898    3436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:49:06.964029    3436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 11:49:06.964297    3436 mustload.go:65] Loading cluster: ha-988000
I0913 11:49:06.964541    3436 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0913 11:49:06.964778    3436 host.go:58] "ha-988000-m02" host status: Stopped
I0913 11:49:06.969208    3436 out.go:177] * Starting "ha-988000-m02" control-plane node in "ha-988000" cluster
I0913 11:49:06.972365    3436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0913 11:49:06.972376    3436 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0913 11:49:06.972386    3436 cache.go:56] Caching tarball of preloaded images
I0913 11:49:06.972456    3436 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0913 11:49:06.972461    3436 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0913 11:49:06.972518    3436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/ha-988000/config.json ...
I0913 11:49:06.973301    3436 start.go:360] acquireMachinesLock for ha-988000-m02: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0913 11:49:06.973363    3436 start.go:364] duration metric: took 30.084µs to acquireMachinesLock for "ha-988000-m02"
I0913 11:49:06.973372    3436 start.go:96] Skipping create...Using existing machine configuration
I0913 11:49:06.973376    3436 fix.go:54] fixHost starting: m02
I0913 11:49:06.973482    3436 fix.go:112] recreateIfNeeded on ha-988000-m02: state=Stopped err=<nil>
W0913 11:49:06.973488    3436 fix.go:138] unexpected machine state, will restart: <nil>
I0913 11:49:06.977288    3436 out.go:177] * Restarting existing qemu2 VM for "ha-988000-m02" ...
I0913 11:49:06.981301    3436 qemu.go:418] Using hvf for hardware acceleration
I0913 11:49:06.981339    3436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a2:e7:d9:16:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/disk.qcow2
I0913 11:49:06.983968    3436 main.go:141] libmachine: STDOUT: 
I0913 11:49:06.983985    3436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0913 11:49:06.984010    3436 fix.go:56] duration metric: took 10.632208ms for fixHost
I0913 11:49:06.984017    3436 start.go:83] releasing machines lock for "ha-988000-m02", held for 10.646792ms
W0913 11:49:06.984022    3436 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0913 11:49:06.984055    3436 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0913 11:49:06.984059    3436 start.go:729] Will try again in 5 seconds ...
I0913 11:49:11.985920    3436 start.go:360] acquireMachinesLock for ha-988000-m02: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0913 11:49:11.986034    3436 start.go:364] duration metric: took 96.833µs to acquireMachinesLock for "ha-988000-m02"
I0913 11:49:11.986068    3436 start.go:96] Skipping create...Using existing machine configuration
I0913 11:49:11.986072    3436 fix.go:54] fixHost starting: m02
I0913 11:49:11.986230    3436 fix.go:112] recreateIfNeeded on ha-988000-m02: state=Stopped err=<nil>
W0913 11:49:11.986237    3436 fix.go:138] unexpected machine state, will restart: <nil>
I0913 11:49:11.990638    3436 out.go:177] * Restarting existing qemu2 VM for "ha-988000-m02" ...
I0913 11:49:11.994699    3436 qemu.go:418] Using hvf for hardware acceleration
I0913 11:49:11.994790    3436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a2:e7:d9:16:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m02/disk.qcow2
I0913 11:49:11.996916    3436 main.go:141] libmachine: STDOUT: 
I0913 11:49:11.996934    3436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0913 11:49:11.996955    3436 fix.go:56] duration metric: took 10.883208ms for fixHost
I0913 11:49:11.996958    3436 start.go:83] releasing machines lock for "ha-988000-m02", held for 10.915625ms
W0913 11:49:11.997003    3436 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0913 11:49:11.999676    3436 out.go:201] 
W0913 11:49:12.003655    3436 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0913 11:49:12.003661    3436 out.go:270] * 
* 
W0913 11:49:12.005466    3436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0913 11:49:12.009722    3436 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-988000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr
E0913 11:49:21.079631    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr: exit status 7 (2m57.886577458s)

-- stdout --
	ha-988000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-988000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-988000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0913 11:49:12.047835    3440 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:49:12.048006    3440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:49:12.048014    3440 out.go:358] Setting ErrFile to fd 2...
	I0913 11:49:12.048016    3440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:49:12.048155    3440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:49:12.048277    3440 out.go:352] Setting JSON to false
	I0913 11:49:12.048291    3440 mustload.go:65] Loading cluster: ha-988000
	I0913 11:49:12.048333    3440 notify.go:220] Checking for updates...
	I0913 11:49:12.048529    3440 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:49:12.048536    3440 status.go:255] checking status of ha-988000 ...
	I0913 11:49:12.049297    3440 status.go:330] ha-988000 host status = "Running" (err=<nil>)
	I0913 11:49:12.049309    3440 host.go:66] Checking if "ha-988000" exists ...
	I0913 11:49:12.049412    3440 host.go:66] Checking if "ha-988000" exists ...
	I0913 11:49:12.049522    3440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 11:49:12.049531    3440 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/id_rsa Username:docker}
	W0913 11:49:12.049718    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 11:49:12.049730    3440 retry.go:31] will retry after 285.089575ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 11:49:12.337042    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 11:49:12.337066    3440 retry.go:31] will retry after 481.008542ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 11:49:12.820225    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 11:49:12.820256    3440 retry.go:31] will retry after 752.390947ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 11:49:13.574574    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 11:49:13.574626    3440 retry.go:31] will retry after 191.91026ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0913 11:49:13.768638    3440 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/id_rsa Username:docker}
	W0913 11:49:13.768917    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 11:49:13.768929    3440 retry.go:31] will retry after 200.998159ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 11:49:39.892825    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0913 11:49:39.892912    3440 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 11:49:39.892930    3440 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 11:49:39.892934    3440 status.go:257] ha-988000 status: &{Name:ha-988000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 11:49:39.892953    3440 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 11:49:39.892957    3440 status.go:255] checking status of ha-988000-m02 ...
	I0913 11:49:39.893216    3440 status.go:330] ha-988000-m02 host status = "Stopped" (err=<nil>)
	I0913 11:49:39.893222    3440 status.go:343] host is not running, skipping remaining checks
	I0913 11:49:39.893224    3440 status.go:257] ha-988000-m02 status: &{Name:ha-988000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 11:49:39.893228    3440 status.go:255] checking status of ha-988000-m03 ...
	I0913 11:49:39.893959    3440 status.go:330] ha-988000-m03 host status = "Running" (err=<nil>)
	I0913 11:49:39.893967    3440 host.go:66] Checking if "ha-988000-m03" exists ...
	I0913 11:49:39.894082    3440 host.go:66] Checking if "ha-988000-m03" exists ...
	I0913 11:49:39.894219    3440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 11:49:39.894227    3440 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m03/id_rsa Username:docker}
	W0913 11:50:54.892986    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0913 11:50:54.893030    3440 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0913 11:50:54.893038    3440 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 11:50:54.893041    3440 status.go:257] ha-988000-m03 status: &{Name:ha-988000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 11:50:54.893051    3440 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 11:50:54.893055    3440 status.go:255] checking status of ha-988000-m04 ...
	I0913 11:50:54.893776    3440 status.go:330] ha-988000-m04 host status = "Running" (err=<nil>)
	I0913 11:50:54.893784    3440 host.go:66] Checking if "ha-988000-m04" exists ...
	I0913 11:50:54.893890    3440 host.go:66] Checking if "ha-988000-m04" exists ...
	I0913 11:50:54.894013    3440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 11:50:54.894019    3440 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000-m04/id_rsa Username:docker}
	W0913 11:52:09.892923    3440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0913 11:52:09.892980    3440 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0913 11:52:09.892989    3440 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0913 11:52:09.892993    3440 status.go:257] ha-988000-m04 status: &{Name:ha-988000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0913 11:52:09.893005    3440 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr" : exit status 7
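
Note on the "will retry after …" lines above: the delays (285 ms, 481 ms, 752 ms, …) indicate a growing, jittered backoff between SSH dials before the status probe gives up on a node. The Go sketch below reproduces that shape only; the constants, jitter, and the dialWithRetry helper are assumptions for illustration, not minikube's retry package.

    // dialretry.go - illustrative dial-with-backoff loop; delays grow per
    // attempt and carry jitter, echoing the 285ms -> 481ms -> 752ms run above.
    package main

    import (
    	"log"
    	"math/rand"
    	"net"
    	"time"
    )

    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		c, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			return c, nil
    		}
    		lastErr = err
    		wait := time.Duration(float64(250*time.Millisecond) * float64(i+1) * (0.8 + 0.4*rand.Float64()))
    		log.Printf("will retry after %v: %v", wait, err)
    		time.Sleep(wait)
    	}
    	return nil, lastErr
    }

    func main() {
    	if _, err := dialWithRetry("192.168.105.5:22", 4); err != nil {
    		log.Fatalf("giving up: %v", err)
    	}
    }

Once the VM's network never comes up, every retry only adds wall-clock time, which is how the single status invocation above reached 2m57s.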
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 3 (25.960922458s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0913 11:52:35.853211    3481 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 11:52:35.853220    3481 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.93s)
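
Note on the failure mode that dominates this report: minikube starts the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the socket_vmnet daemon before it can hand qemu its network fd (the -netdev socket,id=net0,fd=3 argument in the command lines above), and every start here dies at that first connect. Dialing the socket directly separates "daemon not running" from other failures. Illustrative Go sketch; that a healthy daemon accepts unix stream connections at /var/run/socket_vmnet is inferred from the "Connection refused" wording here, not from socket_vmnet's documentation.

    // vmnetprobe.go - checks whether anything is listening on the socket_vmnet
    // unix socket that every qemu2 start in this report fails against.
    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// "connection refused" matches this report's failure mode:
    		// the socket path exists but no daemon is accepting on it.
    		log.Fatalf("socket_vmnet not reachable: %v", err)
    	}
    	conn.Close()
    	log.Println("socket_vmnet is accepting connections")
    }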

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-988000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-988000 -v=7 --alsologtostderr
E0913 11:53:57.041126    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:55:20.127842    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-988000 -v=7 --alsologtostderr: (3m49.021738458s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-988000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-988000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.22951075s)

-- stdout --
	* [ha-988000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-988000" primary control-plane node in "ha-988000" cluster
	* Restarting existing qemu2 VM for "ha-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 11:57:43.067603    3622 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:57:43.067831    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:57:43.067835    3622 out.go:358] Setting ErrFile to fd 2...
	I0913 11:57:43.067838    3622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:57:43.068025    3622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:57:43.069299    3622 out.go:352] Setting JSON to false
	I0913 11:57:43.088944    3622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3426,"bootTime":1726250437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:57:43.089014    3622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:57:43.094338    3622 out.go:177] * [ha-988000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:57:43.102165    3622 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 11:57:43.102210    3622 notify.go:220] Checking for updates...
	I0913 11:57:43.109193    3622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:57:43.112217    3622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:57:43.113545    3622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:57:43.116157    3622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 11:57:43.119220    3622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 11:57:43.122631    3622 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:57:43.122684    3622 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:57:43.127106    3622 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 11:57:43.134182    3622 start.go:297] selected driver: qemu2
	I0913 11:57:43.134188    3622 start.go:901] validating driver "qemu2" against &{Name:ha-988000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-988000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:57:43.134277    3622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 11:57:43.137062    3622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 11:57:43.137090    3622 cni.go:84] Creating CNI manager for ""
	I0913 11:57:43.137115    3622 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 11:57:43.137162    3622 start.go:340] cluster config:
	{Name:ha-988000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-988000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:57:43.141410    3622 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 11:57:43.149200    3622 out.go:177] * Starting "ha-988000" primary control-plane node in "ha-988000" cluster
	I0913 11:57:43.153082    3622 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 11:57:43.153099    3622 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 11:57:43.153106    3622 cache.go:56] Caching tarball of preloaded images
	I0913 11:57:43.153170    3622 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 11:57:43.153176    3622 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 11:57:43.153253    3622 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/ha-988000/config.json ...
	I0913 11:57:43.153743    3622 start.go:360] acquireMachinesLock for ha-988000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 11:57:43.153778    3622 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "ha-988000"
	I0913 11:57:43.153787    3622 start.go:96] Skipping create...Using existing machine configuration
	I0913 11:57:43.153792    3622 fix.go:54] fixHost starting: 
	I0913 11:57:43.153917    3622 fix.go:112] recreateIfNeeded on ha-988000: state=Stopped err=<nil>
	W0913 11:57:43.153927    3622 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 11:57:43.158198    3622 out.go:177] * Restarting existing qemu2 VM for "ha-988000" ...
	I0913 11:57:43.166217    3622 qemu.go:418] Using hvf for hardware acceleration
	I0913 11:57:43.166245    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:e0:d6:69:58:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/disk.qcow2
	I0913 11:57:43.168374    3622 main.go:141] libmachine: STDOUT: 
	I0913 11:57:43.168394    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 11:57:43.168426    3622 fix.go:56] duration metric: took 14.631583ms for fixHost
	I0913 11:57:43.168430    3622 start.go:83] releasing machines lock for "ha-988000", held for 14.64825ms
	W0913 11:57:43.168436    3622 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 11:57:43.168475    3622 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 11:57:43.168480    3622 start.go:729] Will try again in 5 seconds ...
	I0913 11:57:48.170439    3622 start.go:360] acquireMachinesLock for ha-988000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 11:57:48.170794    3622 start.go:364] duration metric: took 284.791µs to acquireMachinesLock for "ha-988000"
	I0913 11:57:48.170917    3622 start.go:96] Skipping create...Using existing machine configuration
	I0913 11:57:48.170937    3622 fix.go:54] fixHost starting: 
	I0913 11:57:48.171647    3622 fix.go:112] recreateIfNeeded on ha-988000: state=Stopped err=<nil>
	W0913 11:57:48.171672    3622 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 11:57:48.180049    3622 out.go:177] * Restarting existing qemu2 VM for "ha-988000" ...
	I0913 11:57:48.184086    3622 qemu.go:418] Using hvf for hardware acceleration
	I0913 11:57:48.184347    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:e0:d6:69:58:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/disk.qcow2
	I0913 11:57:48.193012    3622 main.go:141] libmachine: STDOUT: 
	I0913 11:57:48.193061    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 11:57:48.193129    3622 fix.go:56] duration metric: took 22.189625ms for fixHost
	I0913 11:57:48.193142    3622 start.go:83] releasing machines lock for "ha-988000", held for 22.326917ms
	W0913 11:57:48.193317    3622 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 11:57:48.202032    3622 out.go:201] 
	W0913 11:57:48.206129    3622 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 11:57:48.206154    3622 out.go:270] * 
	* 
	W0913 11:57:48.208738    3622 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 11:57:48.221025    3622 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-988000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-988000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (34.10525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.41s)
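
Note on the cluster config dumps above: the start path reloads the saved profile from /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/ha-988000/config.json before provisioning. When working through a report like this, decoding just the node list from that file is usually enough. In the Go sketch below the structs are a deliberately partial, assumed schema built from field names visible in the dumps (the same keys appear in the profile-list JSON further down), not minikube's full config type.

    // confpeek.go - decode only the node list from a minikube profile config.json.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os"
    )

    type node struct {
    	Name         string
    	IP           string
    	Port         int
    	ControlPlane bool
    	Worker       bool
    }

    type profileConfig struct {
    	Name   string
    	Driver string
    	Nodes  []node
    }

    func main() {
    	f, err := os.Open("/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/ha-988000/config.json")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	var cfg profileConfig
    	if err := json.NewDecoder(f).Decode(&cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("profile %q (driver %s), %d nodes\n", cfg.Name, cfg.Driver, len(cfg.Nodes))
    	for _, n := range cfg.Nodes {
    		fmt.Printf("  %-4s %s:%d control-plane=%v worker=%v\n", n.Name, n.IP, n.Port, n.ControlPlane, n.Worker)
    	}
    }

For this profile it would list the four nodes at 192.168.105.5-8, with m04 as the lone worker.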

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-988000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.283ms)

-- stdout --
	* The control-plane node ha-988000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-988000"

-- /stdout --
** stderr ** 
	I0913 11:57:48.362658    3637 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:57:48.362884    3637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:57:48.362887    3637 out.go:358] Setting ErrFile to fd 2...
	I0913 11:57:48.362889    3637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:57:48.363007    3637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:57:48.363235    3637 mustload.go:65] Loading cluster: ha-988000
	I0913 11:57:48.363511    3637 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0913 11:57:48.363830    3637 out.go:270] ! The control-plane node ha-988000 host is not running (will try others): state=Stopped
	! The control-plane node ha-988000 host is not running (will try others): state=Stopped
	W0913 11:57:48.363942    3637 out.go:270] ! The control-plane node ha-988000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-988000-m02 host is not running (will try others): state=Stopped
	I0913 11:57:48.367486    3637 out.go:177] * The control-plane node ha-988000-m03 host is not running: state=Stopped
	I0913 11:57:48.370418    3637 out.go:177]   To start a cluster, run: "minikube start -p ha-988000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-988000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr: exit status 7 (31.265792ms)

-- stdout --
	ha-988000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 11:57:48.403612    3639 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:57:48.403746    3639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:57:48.403749    3639 out.go:358] Setting ErrFile to fd 2...
	I0913 11:57:48.403751    3639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:57:48.403877    3639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:57:48.404002    3639 out.go:352] Setting JSON to false
	I0913 11:57:48.404011    3639 mustload.go:65] Loading cluster: ha-988000
	I0913 11:57:48.404082    3639 notify.go:220] Checking for updates...
	I0913 11:57:48.404262    3639 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:57:48.404269    3639 status.go:255] checking status of ha-988000 ...
	I0913 11:57:48.404497    3639 status.go:330] ha-988000 host status = "Stopped" (err=<nil>)
	I0913 11:57:48.404500    3639 status.go:343] host is not running, skipping remaining checks
	I0913 11:57:48.404502    3639 status.go:257] ha-988000 status: &{Name:ha-988000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 11:57:48.404512    3639 status.go:255] checking status of ha-988000-m02 ...
	I0913 11:57:48.404598    3639 status.go:330] ha-988000-m02 host status = "Stopped" (err=<nil>)
	I0913 11:57:48.404601    3639 status.go:343] host is not running, skipping remaining checks
	I0913 11:57:48.404603    3639 status.go:257] ha-988000-m02 status: &{Name:ha-988000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 11:57:48.404610    3639 status.go:255] checking status of ha-988000-m03 ...
	I0913 11:57:48.404696    3639 status.go:330] ha-988000-m03 host status = "Stopped" (err=<nil>)
	I0913 11:57:48.404698    3639 status.go:343] host is not running, skipping remaining checks
	I0913 11:57:48.404700    3639 status.go:257] ha-988000-m03 status: &{Name:ha-988000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 11:57:48.404703    3639 status.go:255] checking status of ha-988000-m04 ...
	I0913 11:57:48.404796    3639 status.go:330] ha-988000-m04 host status = "Stopped" (err=<nil>)
	I0913 11:57:48.404798    3639 status.go:343] host is not running, skipping remaining checks
	I0913 11:57:48.404800    3639 status.go:257] ha-988000-m04 status: &{Name:ha-988000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (31.056166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
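
Note on the post-mortem helper above: `status --format={{.Host}}` evaluates a Go text/template against a per-node status object, and the `&{Name:... Host:... Kubelet:...}` dumps show the fields available to it. A tiny illustrative rendering follows; the nodeStatus struct is a trimmed stand-in for illustration, not minikube's type.

    // statusfmt.go - how a --format template like {{.Host}} renders "Stopped".
    package main

    import (
    	"os"
    	"text/template"
    )

    type nodeStatus struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	st := nodeStatus{Name: "ha-988000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    	// prints: Stopped - matching the post-mortem stdout above
    }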

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-988000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-988000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-988000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-988000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (54.3395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.02s)
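
The assertion at ha_test.go:413 reads the Status field out of `minikube profile list --output json` and wants "Degraded" once a secondary control-plane node is gone; because the delete above never happened, the profile still reports "Stopped". A trimmed sketch of that decode, using only field names visible in the dump above (the struct is illustrative, not the test's actual type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Abbreviated from the `profile list` dump above; the Config blob is omitted.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-988000","Status":"Stopped"}]}`)
		var list struct {
			Valid []struct{ Name, Status string } `json:"valid"`
		}
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, p := range list.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // test wants "Degraded", gets "Stopped"
		}
	}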

TestMultiControlPlane/serial/StopCluster (202.1s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 stop -v=7 --alsologtostderr
E0913 11:58:53.334110    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:58:57.030193    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 12:00:16.417407    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-988000 stop -v=7 --alsologtostderr: (3m22.000292375s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr: exit status 7 (66.738958ms)

-- stdout --
	ha-988000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-988000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:01:11.518171    4040 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:01:11.518355    4040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:01:11.518360    4040 out.go:358] Setting ErrFile to fd 2...
	I0913 12:01:11.518363    4040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:01:11.518550    4040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:01:11.518723    4040 out.go:352] Setting JSON to false
	I0913 12:01:11.518733    4040 mustload.go:65] Loading cluster: ha-988000
	I0913 12:01:11.518775    4040 notify.go:220] Checking for updates...
	I0913 12:01:11.519070    4040 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:01:11.519091    4040 status.go:255] checking status of ha-988000 ...
	I0913 12:01:11.519387    4040 status.go:330] ha-988000 host status = "Stopped" (err=<nil>)
	I0913 12:01:11.519391    4040 status.go:343] host is not running, skipping remaining checks
	I0913 12:01:11.519394    4040 status.go:257] ha-988000 status: &{Name:ha-988000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 12:01:11.519406    4040 status.go:255] checking status of ha-988000-m02 ...
	I0913 12:01:11.519551    4040 status.go:330] ha-988000-m02 host status = "Stopped" (err=<nil>)
	I0913 12:01:11.519557    4040 status.go:343] host is not running, skipping remaining checks
	I0913 12:01:11.519559    4040 status.go:257] ha-988000-m02 status: &{Name:ha-988000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 12:01:11.519565    4040 status.go:255] checking status of ha-988000-m03 ...
	I0913 12:01:11.519684    4040 status.go:330] ha-988000-m03 host status = "Stopped" (err=<nil>)
	I0913 12:01:11.519689    4040 status.go:343] host is not running, skipping remaining checks
	I0913 12:01:11.519691    4040 status.go:257] ha-988000-m03 status: &{Name:ha-988000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 12:01:11.519696    4040 status.go:255] checking status of ha-988000-m04 ...
	I0913 12:01:11.519820    4040 status.go:330] ha-988000-m04 host status = "Stopped" (err=<nil>)
	I0913 12:01:11.519824    4040 status.go:343] host is not running, skipping remaining checks
	I0913 12:01:11.519826    4040 status.go:257] ha-988000-m04 status: &{Name:ha-988000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr": ha-988000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr": ha-988000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr": ha-988000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-988000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (32.399416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.10s)
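
The stop itself succeeded (3m22s); the three assertions fail only because the earlier DeleteSecondaryNode step never removed m03, so the status output still contains one more control-plane block than expected. A sketch of the substring counting these checks appear to rely on (an assumption about ha_test.go's mechanics, not a quote of it):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated from the status dump above; m03 should already be gone.
		status := "ha-988000\ntype: Control Plane\n\nha-988000-m02\ntype: Control Plane\n\nha-988000-m03\ntype: Control Plane\n\nha-988000-m04\ntype: Worker\n"
		fmt.Println(strings.Count(status, "type: Control Plane")) // 3; the test expects 2
	}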

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-988000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-988000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.177984125s)

-- stdout --
	* [ha-988000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-988000" primary control-plane node in "ha-988000" cluster
	* Restarting existing qemu2 VM for "ha-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-988000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:01:11.581738    4044 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:01:11.581885    4044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:01:11.581891    4044 out.go:358] Setting ErrFile to fd 2...
	I0913 12:01:11.581901    4044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:01:11.582045    4044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:01:11.583039    4044 out.go:352] Setting JSON to false
	I0913 12:01:11.598908    4044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3634,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:01:11.598996    4044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:01:11.603857    4044 out.go:177] * [ha-988000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:01:11.610798    4044 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:01:11.610839    4044 notify.go:220] Checking for updates...
	I0913 12:01:11.617711    4044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:01:11.620753    4044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:01:11.623778    4044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:01:11.626780    4044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:01:11.629766    4044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:01:11.633053    4044 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:01:11.633311    4044 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:01:11.637627    4044 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:01:11.644797    4044 start.go:297] selected driver: qemu2
	I0913 12:01:11.644805    4044 start.go:901] validating driver "qemu2" against &{Name:ha-988000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-988000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:01:11.644912    4044 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:01:11.647314    4044 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:01:11.647345    4044 cni.go:84] Creating CNI manager for ""
	I0913 12:01:11.647365    4044 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 12:01:11.647417    4044 start.go:340] cluster config:
	{Name:ha-988000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-988000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:01:11.651007    4044 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:01:11.659756    4044 out.go:177] * Starting "ha-988000" primary control-plane node in "ha-988000" cluster
	I0913 12:01:11.662729    4044 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:01:11.662743    4044 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:01:11.662752    4044 cache.go:56] Caching tarball of preloaded images
	I0913 12:01:11.662810    4044 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:01:11.662816    4044 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:01:11.662884    4044 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/ha-988000/config.json ...
	I0913 12:01:11.663351    4044 start.go:360] acquireMachinesLock for ha-988000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:01:11.663386    4044 start.go:364] duration metric: took 29.042µs to acquireMachinesLock for "ha-988000"
	I0913 12:01:11.663395    4044 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:01:11.663401    4044 fix.go:54] fixHost starting: 
	I0913 12:01:11.663533    4044 fix.go:112] recreateIfNeeded on ha-988000: state=Stopped err=<nil>
	W0913 12:01:11.663541    4044 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:01:11.667769    4044 out.go:177] * Restarting existing qemu2 VM for "ha-988000" ...
	I0913 12:01:11.675738    4044 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:01:11.675781    4044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:e0:d6:69:58:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/disk.qcow2
	I0913 12:01:11.677804    4044 main.go:141] libmachine: STDOUT: 
	I0913 12:01:11.677826    4044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:01:11.677857    4044 fix.go:56] duration metric: took 14.455833ms for fixHost
	I0913 12:01:11.677861    4044 start.go:83] releasing machines lock for "ha-988000", held for 14.470833ms
	W0913 12:01:11.677866    4044 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:01:11.677900    4044 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:01:11.677905    4044 start.go:729] Will try again in 5 seconds ...
	I0913 12:01:16.678349    4044 start.go:360] acquireMachinesLock for ha-988000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:01:16.678751    4044 start.go:364] duration metric: took 295.792µs to acquireMachinesLock for "ha-988000"
	I0913 12:01:16.678866    4044 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:01:16.678884    4044 fix.go:54] fixHost starting: 
	I0913 12:01:16.679628    4044 fix.go:112] recreateIfNeeded on ha-988000: state=Stopped err=<nil>
	W0913 12:01:16.679653    4044 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:01:16.684078    4044 out.go:177] * Restarting existing qemu2 VM for "ha-988000" ...
	I0913 12:01:16.688028    4044 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:01:16.688268    4044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:e0:d6:69:58:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/ha-988000/disk.qcow2
	I0913 12:01:16.697034    4044 main.go:141] libmachine: STDOUT: 
	I0913 12:01:16.697122    4044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:01:16.697202    4044 fix.go:56] duration metric: took 18.315042ms for fixHost
	I0913 12:01:16.697220    4044 start.go:83] releasing machines lock for "ha-988000", held for 18.446625ms
	W0913 12:01:16.697425    4044 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-988000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:01:16.705032    4044 out.go:201] 
	W0913 12:01:16.708892    4044 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:01:16.708922    4044 out.go:270] * 
	* 
	W0913 12:01:16.711426    4044 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:01:16.719237    4044 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-988000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (71.303375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
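
Each restart attempt above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never gets its network file descriptor and the driver gives up after one retry. A minimal probe for the CI host (not part of the suite; the socket path is taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// The same connect the qemu2 driver performs via socket_vmnet_client;
		// "connection refused" here means no socket_vmnet daemon is listening.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the probe fails, restarting the daemon on this agent (for a Homebrew install, something like `sudo brew services restart socket_vmnet`) would be the first thing to try; every socket_vmnet-backed test in this run fails with the identical error.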

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-988000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-988000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-988000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-988000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (30.865084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-988000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-988000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.699125ms)

-- stdout --
	* The control-plane node ha-988000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-988000"

-- /stdout --
** stderr ** 
	I0913 12:01:16.916634    4059 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:01:16.916807    4059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:01:16.916810    4059 out.go:358] Setting ErrFile to fd 2...
	I0913 12:01:16.916813    4059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:01:16.916928    4059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:01:16.917142    4059 mustload.go:65] Loading cluster: ha-988000
	I0913 12:01:16.917381    4059 config.go:182] Loaded profile config "ha-988000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0913 12:01:16.917687    4059 out.go:270] ! The control-plane node ha-988000 host is not running (will try others): state=Stopped
	! The control-plane node ha-988000 host is not running (will try others): state=Stopped
	W0913 12:01:16.917785    4059 out.go:270] ! The control-plane node ha-988000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-988000-m02 host is not running (will try others): state=Stopped
	I0913 12:01:16.922450    4059 out.go:177] * The control-plane node ha-988000-m03 host is not running: state=Stopped
	I0913 12:01:16.926424    4059 out.go:177]   To start a cluster, run: "minikube start -p ha-988000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-988000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-988000 -n ha-988000: exit status 7 (30.810125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-988000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.93s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-537000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-537000 --driver=qemu2 : exit status 80 (9.854231666s)

-- stdout --
	* [image-537000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-537000" primary control-plane node in "image-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-537000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-537000 -n image-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-537000 -n image-537000: exit status 7 (70.460792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-537000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.93s)

TestJSONOutput/start/Command (9.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-613000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-613000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.903095s)

-- stdout --
	{"specversion":"1.0","id":"704e8817-5b78-4437-815c-b1a9c582e2c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-613000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae2c021f-5fee-4041-bf97-59b0f9925689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"4d149fcf-2e12-4b2b-ac2d-04912e85cfa7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig"}}
	{"specversion":"1.0","id":"25de39ce-d9c2-408d-960c-8befefa9a54b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"86a261f1-cd54-4dc5-93dc-3aecc868e549","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d547c92e-a2c2-4035-a189-45a9b2945a40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube"}}
	{"specversion":"1.0","id":"1a3d8e4e-6a45-46b7-9cf5-a3ba6da71ddd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f9fef5b-115e-4de1-a78b-da534e023abe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"988438ab-0cc7-4be1-85c4-5bcb19037174","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"eede7abe-2442-416f-be03-b641f6a8c99e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-613000\" primary control-plane node in \"json-output-613000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b82ae362-4313-43df-9cac-b36e5599c3b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"25d813d1-24d8-4a13-81e1-c5aafdb7c359","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-613000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"28d135a9-1ad4-477a-b1be-4153f184263c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"e1b4392f-c463-479e-8210-8728bc5856f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2894655c-0f9b-44a3-b12d-db9303b7b7be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-613000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"23e0d9cc-ad0f-4d56-a1b0-b03f6021fff9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"9c780b9d-fc9e-4104-a7c5-8af924d19cbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-613000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.90s)
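
json_output_test.go decodes stdout line by line as CloudEvents, so the raw OUTPUT:/ERROR: lines that socket_vmnet_client interleaves with the JSON stream are exactly what produce "invalid character 'O' looking for beginning of value". A self-contained reproduction of that decode step (the map-based event is illustrative; the real test uses its own types):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Two lines from the dump above: a real CloudEvent, then a raw driver line.
		out := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.info\"}\nOUTPUT: \n"
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Println("unable to marshal output:", err) // invalid character 'O' ...
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}

The same mechanism explains the unpause failure later in this report, where the plain-text "*" lines trigger "invalid character '*' looking for beginning of value".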

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-613000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-613000 --output=json --user=testUser: exit status 83 (78.499458ms)

-- stdout --
	{"specversion":"1.0","id":"8851bd0e-0182-44e1-ae81-8886386e16c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-613000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"3aee8316-3eae-476c-a759-1025253d0b76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-613000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-613000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-613000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-613000 --output=json --user=testUser: exit status 83 (44.343125ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-613000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-613000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-613000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-613000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.11s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-911000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-911000 --driver=qemu2 : exit status 80 (9.812877125s)

                                                
                                                
-- stdout --
	* [first-911000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-911000" primary control-plane node in "first-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-911000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-13 12:01:51.111043 -0700 PDT m=+2519.562380543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-913000 -n second-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-913000 -n second-913000: exit status 85 (79.750416ms)

                                                
                                                
-- stdout --
	* Profile "second-913000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-913000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-913000" host is not running, skipping log retrieval (state="* Profile \"second-913000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-913000\"")
helpers_test.go:175: Cleaning up "second-913000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-913000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-13 12:01:51.298979 -0700 PDT m=+2519.750324501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-911000 -n first-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-911000 -n first-911000: exit status 7 (31.156916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-911000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-911000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-911000
--- FAIL: TestMinikubeProfile (10.11s)
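
Root-cause note: every qemu2 start in this report fails at the same precondition. Nothing is accepting connections on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand a network fd to qemu-system-aarch64. A minimal standalone probe (an illustrative sketch, not minikube code) that reproduces the exact "Connection refused" from the logs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client connects to.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon running, this prints "connection refused"
		// (or "no such file or directory" if the socket path is absent).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet daemon on the CI host is the likely fix suggested by the error chain, independent of any individual test.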

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-944000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-944000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.869178875s)

                                                
                                                
-- stdout --
	* [mount-start-1-944000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-944000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-944000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-944000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-944000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-944000 -n mount-start-1-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-944000 -n mount-start-1-944000: exit status 7 (68.419625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.94s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-816000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-816000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.823226666s)

                                                
                                                
-- stdout --
	* [multinode-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-816000" primary control-plane node in "multinode-816000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-816000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 12:02:01.569599    4203 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:02:01.569717    4203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:02:01.569721    4203 out.go:358] Setting ErrFile to fd 2...
	I0913 12:02:01.569724    4203 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:02:01.569857    4203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:02:01.570954    4203 out.go:352] Setting JSON to false
	I0913 12:02:01.586924    4203 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3684,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:02:01.586992    4203 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:02:01.594025    4203 out.go:177] * [multinode-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:02:01.601914    4203 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:02:01.601982    4203 notify.go:220] Checking for updates...
	I0913 12:02:01.609821    4203 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:02:01.612886    4203 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:02:01.615931    4203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:02:01.618920    4203 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:02:01.621953    4203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:02:01.625131    4203 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:02:01.628772    4203 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:02:01.635916    4203 start.go:297] selected driver: qemu2
	I0913 12:02:01.635922    4203 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:02:01.635931    4203 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:02:01.638206    4203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:02:01.639942    4203 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:02:01.643016    4203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:02:01.643038    4203 cni.go:84] Creating CNI manager for ""
	I0913 12:02:01.643060    4203 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0913 12:02:01.643064    4203 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 12:02:01.643107    4203 start.go:340] cluster config:
	{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:02:01.646790    4203 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:02:01.654865    4203 out.go:177] * Starting "multinode-816000" primary control-plane node in "multinode-816000" cluster
	I0913 12:02:01.658929    4203 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:02:01.658946    4203 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:02:01.658958    4203 cache.go:56] Caching tarball of preloaded images
	I0913 12:02:01.659034    4203 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:02:01.659039    4203 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:02:01.659258    4203 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/multinode-816000/config.json ...
	I0913 12:02:01.659270    4203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/multinode-816000/config.json: {Name:mkd180778edd19590a5cf6a3a6001e012a79d383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:02:01.659497    4203 start.go:360] acquireMachinesLock for multinode-816000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:02:01.659534    4203 start.go:364] duration metric: took 30µs to acquireMachinesLock for "multinode-816000"
	I0913 12:02:01.659544    4203 start.go:93] Provisioning new machine with config: &{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:02:01.659570    4203 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:02:01.667932    4203 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:02:01.686221    4203 start.go:159] libmachine.API.Create for "multinode-816000" (driver="qemu2")
	I0913 12:02:01.686255    4203 client.go:168] LocalClient.Create starting
	I0913 12:02:01.686322    4203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:02:01.686356    4203 main.go:141] libmachine: Decoding PEM data...
	I0913 12:02:01.686367    4203 main.go:141] libmachine: Parsing certificate...
	I0913 12:02:01.686403    4203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:02:01.686429    4203 main.go:141] libmachine: Decoding PEM data...
	I0913 12:02:01.686439    4203 main.go:141] libmachine: Parsing certificate...
	I0913 12:02:01.686809    4203 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:02:01.849064    4203 main.go:141] libmachine: Creating SSH key...
	I0913 12:02:01.964451    4203 main.go:141] libmachine: Creating Disk image...
	I0913 12:02:01.964456    4203 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:02:01.964653    4203 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:02:01.973686    4203 main.go:141] libmachine: STDOUT: 
	I0913 12:02:01.973704    4203 main.go:141] libmachine: STDERR: 
	I0913 12:02:01.973774    4203 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2 +20000M
	I0913 12:02:01.981498    4203 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:02:01.981514    4203 main.go:141] libmachine: STDERR: 
	I0913 12:02:01.981526    4203 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:02:01.981535    4203 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:02:01.981548    4203 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:02:01.981579    4203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:d9:b8:4e:0a:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:02:01.983199    4203 main.go:141] libmachine: STDOUT: 
	I0913 12:02:01.983221    4203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:02:01.983243    4203 client.go:171] duration metric: took 296.9935ms to LocalClient.Create
	I0913 12:02:03.985354    4203 start.go:128] duration metric: took 2.325855583s to createHost
	I0913 12:02:03.985474    4203 start.go:83] releasing machines lock for "multinode-816000", held for 2.326017417s
	W0913 12:02:03.985525    4203 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:02:03.992709    4203 out.go:177] * Deleting "multinode-816000" in qemu2 ...
	W0913 12:02:04.024431    4203 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:02:04.024454    4203 start.go:729] Will try again in 5 seconds ...
	I0913 12:02:09.026465    4203 start.go:360] acquireMachinesLock for multinode-816000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:02:09.026999    4203 start.go:364] duration metric: took 448.417µs to acquireMachinesLock for "multinode-816000"
	I0913 12:02:09.027175    4203 start.go:93] Provisioning new machine with config: &{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:02:09.027455    4203 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:02:09.044893    4203 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:02:09.096043    4203 start.go:159] libmachine.API.Create for "multinode-816000" (driver="qemu2")
	I0913 12:02:09.096090    4203 client.go:168] LocalClient.Create starting
	I0913 12:02:09.096222    4203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:02:09.096279    4203 main.go:141] libmachine: Decoding PEM data...
	I0913 12:02:09.096296    4203 main.go:141] libmachine: Parsing certificate...
	I0913 12:02:09.096363    4203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:02:09.096406    4203 main.go:141] libmachine: Decoding PEM data...
	I0913 12:02:09.096418    4203 main.go:141] libmachine: Parsing certificate...
	I0913 12:02:09.096935    4203 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:02:09.264403    4203 main.go:141] libmachine: Creating SSH key...
	I0913 12:02:09.297802    4203 main.go:141] libmachine: Creating Disk image...
	I0913 12:02:09.297807    4203 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:02:09.298000    4203 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:02:09.307096    4203 main.go:141] libmachine: STDOUT: 
	I0913 12:02:09.307116    4203 main.go:141] libmachine: STDERR: 
	I0913 12:02:09.307173    4203 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2 +20000M
	I0913 12:02:09.315068    4203 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:02:09.315082    4203 main.go:141] libmachine: STDERR: 
	I0913 12:02:09.315092    4203 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:02:09.315097    4203 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:02:09.315108    4203 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:02:09.315145    4203 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:21:a5:f9:c6:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:02:09.316737    4203 main.go:141] libmachine: STDOUT: 
	I0913 12:02:09.316751    4203 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:02:09.316764    4203 client.go:171] duration metric: took 220.676167ms to LocalClient.Create
	I0913 12:02:11.318853    4203 start.go:128] duration metric: took 2.291459166s to createHost
	I0913 12:02:11.318923    4203 start.go:83] releasing machines lock for "multinode-816000", held for 2.291947042s
	W0913 12:02:11.319356    4203 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-816000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:02:11.333922    4203 out.go:201] 
	W0913 12:02:11.338090    4203 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:02:11.338117    4203 out.go:270] * 
	* 
	W0913 12:02:11.341000    4203 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:02:11.349909    4203 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-816000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (68.743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
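
The stderr trace above shows the provisioning pipeline getting through disk preparation (qemu-img convert, then qemu-img resize +20000M, both with empty STDERR) and only failing when the VM is launched through socket_vmnet. A sketch of the disk step in isolation, with hypothetical paths (the log uses .minikube/machines/multinode-816000/disk.qcow2):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring the
// STDOUT/STDERR pairs in the libmachine log lines above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	raw, qcow2 := "/tmp/disk.qcow2.raw", "/tmp/disk.qcow2" // hypothetical paths
	if err := run("qemu-img", "create", "-f", "raw", raw, "1M"); err != nil {
		return // seed image creation failed
	}
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
		return
	}
	// Same resize invocation as the log ("Image resized." on STDOUT).
	_ = run("qemu-img", "resize", qcow2, "+20000M")
}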

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (115.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.530375ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-816000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- rollout status deployment/busybox: exit status 1 (59.154541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.777792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.419583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.312959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.99675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.361792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.25925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.072583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.07475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.62425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.40925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0913 12:03:53.322401    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 12:03:57.019226    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.8665ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.862458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.017542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.37375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.162666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.530208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.98s)
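
DeployApp2Nodes burns 115.98s against a cluster that never existed because the pod-IP check is a poll-until-deadline loop: each query fails fast with "no server found", the helper logs "may be temporary", sleeps, and tries again. A sketch of that pattern (using plain kubectl rather than the minikube wrapper; the interval and deadline are assumptions, the test's real values live in multinode_test.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollPodIPs retries an equivalent jsonpath query until it succeeds or
// the deadline passes.
func pollPodIPs(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		if err == nil {
			fmt.Printf("pod IPs: %s\n", out)
			return nil
		}
		fmt.Printf("failed to retrieve Pod IPs (may be temporary): %v\n", err)
		time.Sleep(10 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("failed to resolve pod IPs within %s", timeout)
}

func main() {
	_ = pollPodIPs("multinode-816000", 2*time.Minute)
}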

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-816000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.927125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.430958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-816000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-816000 -v 3 --alsologtostderr: exit status 83 (42.577958ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-816000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-816000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 12:04:07.531749    4301 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:07.531906    4301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.531909    4301 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:07.531912    4301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.532029    4301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:07.532263    4301 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:07.532489    4301 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:07.536433    4301 out.go:177] * The control-plane node multinode-816000 host is not running: state=Stopped
	I0913 12:04:07.540422    4301 out.go:177]   To start a cluster, run: "minikube start -p multinode-816000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-816000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (29.877917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-816000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-816000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.317875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-816000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-816000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-816000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.919958ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
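
The "unexpected end of JSON input" message above is the stock encoding/json error for decoding empty input: kubectl exited non-zero without writing anything to stdout, so the test decoded an empty string. A self-contained reproduction (the label slice type is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl produced no stdout, so the test effectively decoded "".
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}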

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-816000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-816000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-816000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-816000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.455916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
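
The assertion above counts the entries under Config.Nodes in the 'profile list' JSON and finds 1 where it wants 3: only the primary control-plane node survived the earlier failures. A trimmed sketch of that count against the JSON shape from the log (only the fields needed are declared; the struct names are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// Just enough structure to count nodes; the field names match the log's JSON.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-816000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	// Prints 1, matching the "include 3 nodes but have 1 nodes" failure above.
	fmt.Println(len(pl.Valid[0].Config.Nodes))
}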

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status --output json --alsologtostderr: exit status 7 (30.496334ms)

-- stdout --
	{"Name":"multinode-816000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0913 12:04:07.739834    4313 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:07.739981    4313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.739984    4313 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:07.739986    4313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.740118    4313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:07.740241    4313 out.go:352] Setting JSON to true
	I0913 12:04:07.740250    4313 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:07.740315    4313 notify.go:220] Checking for updates...
	I0913 12:04:07.740452    4313 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:07.740458    4313 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:07.740699    4313 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:07.740702    4313 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:07.740705    4313 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-816000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.130958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
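
The unmarshal error above pinpoints the mismatch: with one node, `minikube status --output json` prints a single bare object (see the stdout block), but the test decodes into a slice of statuses. A reproduction with an illustrative status type:

package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the shape minikube printed for the lone stopped node.
type status struct {
	Name string
	Host string
}

func main() {
	raw := []byte(`{"Name":"multinode-816000","Host":"Stopped"}`)
	var statuses []status
	err := json.Unmarshal(raw, &statuses)
	// Prints: json: cannot unmarshal object into Go value of type []main.status
	fmt.Println(err)
}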

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 node stop m03: exit status 85 (47.91975ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-816000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status: exit status 7 (31.040541ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr: exit status 7 (30.303333ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:07.880092    4321 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:07.880230    4321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.880233    4321 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:07.880235    4321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.880363    4321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:07.880481    4321 out.go:352] Setting JSON to false
	I0913 12:04:07.880488    4321 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:07.880545    4321 notify.go:220] Checking for updates...
	I0913 12:04:07.880692    4321 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:07.880698    4321 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:07.880917    4321 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:07.880920    4321 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:07.880922    4321 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr": multinode-816000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.019959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
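
The "incorrect number of running kubelets" failure comes down to scanning the status text for per-node kubelet states; with the host stopped there are zero running kubelets to find. A rough sketch of that style of check (the counting logic is paraphrased, not the test's exact code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The status text captured above; in the real test it comes from the binary.
	out := "multinode-816000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	running := strings.Count(out, "kubelet: Running")
	stopped := strings.Count(out, "kubelet: Stopped")
	// Prints "0 running / 1 stopped": nothing to count with the host down.
	fmt.Printf("%d running / %d stopped\n", running, stopped)
}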

TestMultiNode/serial/StartAfterStop (54.68s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.197292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0913 12:04:07.941244    4325 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:07.941528    4325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.941532    4325 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:07.941534    4325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.941666    4325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:07.941884    4325 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:07.942088    4325 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:07.946418    4325 out.go:201] 
	W0913 12:04:07.949448    4325 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0913 12:04:07.949453    4325 out.go:270] * 
	* 
	W0913 12:04:07.951173    4325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:04:07.952564    4325 out.go:201] 

** /stderr **
multinode_test.go:284: I0913 12:04:07.941244    4325 out.go:345] Setting OutFile to fd 1 ...
I0913 12:04:07.941528    4325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 12:04:07.941532    4325 out.go:358] Setting ErrFile to fd 2...
I0913 12:04:07.941534    4325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 12:04:07.941666    4325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 12:04:07.941884    4325 mustload.go:65] Loading cluster: multinode-816000
I0913 12:04:07.942088    4325 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 12:04:07.946418    4325 out.go:201] 
W0913 12:04:07.949448    4325 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0913 12:04:07.949453    4325 out.go:270] * 
* 
W0913 12:04:07.951173    4325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0913 12:04:07.952564    4325 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-816000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (30.375ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:07.986069    4327 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:07.986231    4327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.986235    4327 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:07.986237    4327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:07.986367    4327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:07.986488    4327 out.go:352] Setting JSON to false
	I0913 12:04:07.986499    4327 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:07.986564    4327 notify.go:220] Checking for updates...
	I0913 12:04:07.986697    4327 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:07.986703    4327 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:07.986945    4327 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:07.986948    4327 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:07.986950    4327 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (71.782791ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:09.438484    4329 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:09.438694    4329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:09.438699    4329 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:09.438702    4329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:09.438865    4329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:09.439053    4329 out.go:352] Setting JSON to false
	I0913 12:04:09.439063    4329 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:09.439118    4329 notify.go:220] Checking for updates...
	I0913 12:04:09.439344    4329 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:09.439358    4329 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:09.439666    4329 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:09.439670    4329 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:09.439673    4329 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (73.958875ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:11.421109    4333 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:11.421322    4333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:11.421327    4333 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:11.421330    4333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:11.421495    4333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:11.421648    4333 out.go:352] Setting JSON to false
	I0913 12:04:11.421658    4333 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:11.421707    4333 notify.go:220] Checking for updates...
	I0913 12:04:11.421951    4333 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:11.421960    4333 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:11.422291    4333 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:11.422296    4333 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:11.422299    4333 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (73.48025ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:13.909904    4335 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:13.910107    4335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:13.910111    4335 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:13.910114    4335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:13.910292    4335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:13.910444    4335 out.go:352] Setting JSON to false
	I0913 12:04:13.910454    4335 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:13.910493    4335 notify.go:220] Checking for updates...
	I0913 12:04:13.910729    4335 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:13.910739    4335 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:13.911056    4335 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:13.911061    4335 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:13.911064    4335 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (74.106583ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:16.658444    4337 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:16.658649    4337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:16.658654    4337 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:16.658657    4337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:16.658828    4337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:16.658978    4337 out.go:352] Setting JSON to false
	I0913 12:04:16.658989    4337 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:16.659025    4337 notify.go:220] Checking for updates...
	I0913 12:04:16.659244    4337 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:16.659252    4337 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:16.659563    4337 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:16.659568    4337 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:16.659571    4337 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (75.541416ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:19.287563    4339 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:19.287765    4339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:19.287770    4339 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:19.287774    4339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:19.287953    4339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:19.288119    4339 out.go:352] Setting JSON to false
	I0913 12:04:19.288131    4339 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:19.288180    4339 notify.go:220] Checking for updates...
	I0913 12:04:19.288436    4339 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:19.288444    4339 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:19.288746    4339 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:19.288751    4339 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:19.288754    4339 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (75.373959ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:25.358514    4341 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:25.358714    4341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:25.358719    4341 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:25.358722    4341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:25.358900    4341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:25.359044    4341 out.go:352] Setting JSON to false
	I0913 12:04:25.359054    4341 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:25.359093    4341 notify.go:220] Checking for updates...
	I0913 12:04:25.359311    4341 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:25.359318    4341 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:25.359633    4341 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:25.359638    4341 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:25.359641    4341 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (74.097209ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:04:37.933973    4344 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:04:37.934148    4344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:37.934152    4344 out.go:358] Setting ErrFile to fd 2...
	I0913 12:04:37.934156    4344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:04:37.934332    4344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:04:37.934501    4344 out.go:352] Setting JSON to false
	I0913 12:04:37.934512    4344 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:04:37.934560    4344 notify.go:220] Checking for updates...
	I0913 12:04:37.934801    4344 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:04:37.934809    4344 status.go:255] checking status of multinode-816000 ...
	I0913 12:04:37.935130    4344 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:04:37.935135    4344 status.go:343] host is not running, skipping remaining checks
	I0913 12:04:37.935138    4344 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr: exit status 7 (76.649291ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:05:02.552667    4351 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:05:02.552843    4351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:02.552848    4351 out.go:358] Setting ErrFile to fd 2...
	I0913 12:05:02.552851    4351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:02.553012    4351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:05:02.553187    4351 out.go:352] Setting JSON to false
	I0913 12:05:02.553199    4351 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:05:02.553231    4351 notify.go:220] Checking for updates...
	I0913 12:05:02.553485    4351 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:05:02.553494    4351 status.go:255] checking status of multinode-816000 ...
	I0913 12:05:02.553806    4351 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:05:02.553811    4351 status.go:343] host is not running, skipping remaining checks
	I0913 12:05:02.553814    4351 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-816000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (33.081958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.68s)
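
The probe timestamps above (12:04:07, 12:04:09, 12:04:11, ..., 12:05:02) show the test polling status with widening gaps before giving up. A sketch of such a backoff loop, with hypothetical intervals inferred from those timestamps:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical schedule; the real test's intervals are only inferred here.
	for i, d := range []time.Duration{0, 2 * time.Second, 4 * time.Second, 8 * time.Second, 16 * time.Second} {
		time.Sleep(d)
		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-816000", "status").Run()
		if err == nil {
			fmt.Println("status healthy after", i, "retries")
			return
		}
	}
	fmt.Println("status never recovered, so the test fails as above")
}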

TestMultiNode/serial/RestartKeepsNodes (9.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-816000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-816000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-816000: (3.724796083s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-816000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-816000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224204708s)

-- stdout --
	* [multinode-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-816000" primary control-plane node in "multinode-816000" cluster
	* Restarting existing qemu2 VM for "multinode-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:05:06.405668    4375 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:05:06.405836    4375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:06.405841    4375 out.go:358] Setting ErrFile to fd 2...
	I0913 12:05:06.405844    4375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:06.406008    4375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:05:06.407192    4375 out.go:352] Setting JSON to false
	I0913 12:05:06.426199    4375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3869,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:05:06.426270    4375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:05:06.431140    4375 out.go:177] * [multinode-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:05:06.438205    4375 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:05:06.438236    4375 notify.go:220] Checking for updates...
	I0913 12:05:06.445158    4375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:05:06.448177    4375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:05:06.451091    4375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:05:06.454114    4375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:05:06.457138    4375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:05:06.460410    4375 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:05:06.460467    4375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:05:06.465130    4375 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:05:06.472052    4375 start.go:297] selected driver: qemu2
	I0913 12:05:06.472056    4375 start.go:901] validating driver "qemu2" against &{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:05:06.472103    4375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:05:06.474618    4375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:05:06.474642    4375 cni.go:84] Creating CNI manager for ""
	I0913 12:05:06.474668    4375 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 12:05:06.474724    4375 start.go:340] cluster config:
	{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:05:06.478402    4375 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:06.484975    4375 out.go:177] * Starting "multinode-816000" primary control-plane node in "multinode-816000" cluster
	I0913 12:05:06.489162    4375 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:05:06.489177    4375 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:05:06.489188    4375 cache.go:56] Caching tarball of preloaded images
	I0913 12:05:06.489253    4375 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:05:06.489260    4375 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:05:06.489325    4375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/multinode-816000/config.json ...
	I0913 12:05:06.489803    4375 start.go:360] acquireMachinesLock for multinode-816000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:05:06.489837    4375 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "multinode-816000"
	I0913 12:05:06.489846    4375 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:05:06.489851    4375 fix.go:54] fixHost starting: 
	I0913 12:05:06.489982    4375 fix.go:112] recreateIfNeeded on multinode-816000: state=Stopped err=<nil>
	W0913 12:05:06.489990    4375 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:05:06.498156    4375 out.go:177] * Restarting existing qemu2 VM for "multinode-816000" ...
	I0913 12:05:06.502059    4375 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:05:06.502099    4375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:21:a5:f9:c6:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:05:06.504126    4375 main.go:141] libmachine: STDOUT: 
	I0913 12:05:06.504150    4375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:05:06.504183    4375 fix.go:56] duration metric: took 14.32975ms for fixHost
	I0913 12:05:06.504187    4375 start.go:83] releasing machines lock for "multinode-816000", held for 14.346792ms
	W0913 12:05:06.504193    4375 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:05:06.504228    4375 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:06.504233    4375 start.go:729] Will try again in 5 seconds ...
	I0913 12:05:11.506236    4375 start.go:360] acquireMachinesLock for multinode-816000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:05:11.506626    4375 start.go:364] duration metric: took 303.333µs to acquireMachinesLock for "multinode-816000"
	I0913 12:05:11.506731    4375 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:05:11.506751    4375 fix.go:54] fixHost starting: 
	I0913 12:05:11.507461    4375 fix.go:112] recreateIfNeeded on multinode-816000: state=Stopped err=<nil>
	W0913 12:05:11.507487    4375 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:05:11.512422    4375 out.go:177] * Restarting existing qemu2 VM for "multinode-816000" ...
	I0913 12:05:11.521639    4375 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:05:11.521919    4375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:21:a5:f9:c6:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:05:11.530865    4375 main.go:141] libmachine: STDOUT: 
	I0913 12:05:11.530932    4375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:05:11.531007    4375 fix.go:56] duration metric: took 24.259667ms for fixHost
	I0913 12:05:11.531024    4375 start.go:83] releasing machines lock for "multinode-816000", held for 24.377334ms
	W0913 12:05:11.531227    4375 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:11.539604    4375 out.go:201] 
	W0913 12:05:11.543589    4375 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:05:11.543634    4375 out.go:270] * 
	* 
	W0913 12:05:11.545959    4375 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:05:11.554526    4375 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-816000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-816000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (33.012541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.08s)
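
Every failure in this block dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives the file descriptor behind -netdev socket,id=net0,fd=3 and the VM is never started. The sketch below (a hypothetical diagnostic helper, Go standard library only, not minikube code) dials the socket the same way and separates a missing socket file from a daemon that is not listening:

-- example (Go): probe the socket_vmnet socket --
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"net"
	"os"
	"syscall"
	"time"
)

func main() {
	// The path socket_vmnet_client is invoked with throughout this log.
	const path = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("socket_vmnet is up and accepting connections")
		return
	}

	switch {
	case errors.Is(err, fs.ErrNotExist):
		// ENOENT: the socket file itself is gone.
		fmt.Println("socket file missing: socket_vmnet was never started")
	case errors.Is(err, syscall.ECONNREFUSED):
		// ECONNREFUSED: the file exists but nothing is accepting on it.
		fmt.Println("connection refused: stale socket, daemon not running")
	default:
		fmt.Fprintf(os.Stderr, "unexpected dial error: %v\n", err)
	}
	os.Exit(1)
}
-- /example --

"Connection refused" (rather than "no such file or directory") indicates the socket path exists but no daemon is accepting on it, which is consistent with every start attempt in this run failing within milliseconds instead of timing out.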

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 node delete m03: exit status 83 (40.307166ms)

-- stdout --
	* The control-plane node multinode-816000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-816000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-816000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr: exit status 7 (30.737333ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:05:11.741875    4390 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:05:11.742030    4390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:11.742033    4390 out.go:358] Setting ErrFile to fd 2...
	I0913 12:05:11.742036    4390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:11.742172    4390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:05:11.742285    4390 out.go:352] Setting JSON to false
	I0913 12:05:11.742293    4390 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:05:11.742345    4390 notify.go:220] Checking for updates...
	I0913 12:05:11.742490    4390 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:05:11.742496    4390 status.go:255] checking status of multinode-816000 ...
	I0913 12:05:11.742723    4390 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:05:11.742727    4390 status.go:343] host is not running, skipping remaining checks
	I0913 12:05:11.742729    4390 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (30.609459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
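
The two exit codes in this block carry the diagnosis: "node delete" exits 83 alongside the "host is not running" advisory, while "status" exits 7 once the host reports Stopped (which helpers_test.go treats as "may be ok"). A minimal sketch of recovering such codes from a subprocess, using only the Go standard library (illustrative, not the harness's actual code):

-- example (Go): read a subprocess exit code --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output plus the numeric
// exit status; a non-zero exit is data here, not an execution failure.
func run(bin string, args ...string) (string, int, error) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil
	}
	return string(out), 0, err // err != nil only if the binary failed to launch
}

func main() {
	out, code, err := run("out/minikube-darwin-arm64",
		"-p", "multinode-816000", "status", "--format={{.Host}}")
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit=%d output=%q\n", code, out) // in this run: exit=7, output "Stopped"
}
-- /example --

errors.As is needed because a non-zero exit surfaces as *exec.ExitError; only a genuinely failed launch (missing binary, bad permissions) should be treated as an error here.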

TestMultiNode/serial/StopMultiNode (3.58s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-816000 stop: (3.4458385s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status: exit status 7 (65.55375ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr: exit status 7 (32.70175ms)

-- stdout --
	multinode-816000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 12:05:15.316916    4416 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:05:15.317056    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:15.317059    4416 out.go:358] Setting ErrFile to fd 2...
	I0913 12:05:15.317062    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:15.317204    4416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:05:15.317341    4416 out.go:352] Setting JSON to false
	I0913 12:05:15.317356    4416 mustload.go:65] Loading cluster: multinode-816000
	I0913 12:05:15.317415    4416 notify.go:220] Checking for updates...
	I0913 12:05:15.317568    4416 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:05:15.317574    4416 status.go:255] checking status of multinode-816000 ...
	I0913 12:05:15.317814    4416 status.go:330] multinode-816000 host status = "Stopped" (err=<nil>)
	I0913 12:05:15.317817    4416 status.go:343] host is not running, skipping remaining checks
	I0913 12:05:15.317819    4416 status.go:257] multinode-816000 status: &{Name:multinode-816000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr": multinode-816000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-816000 status --alsologtostderr": multinode-816000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (31.263167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.58s)
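
The assertions at multinode_test.go:364 and :368 amount to counting "host: Stopped" and "kubelet: Stopped" markers in the status output and comparing them against the expected node count; because the second node was never created, only one of each is present instead of two. A sketch of that check (a plausible reading of the assertion, not a verbatim copy of the test):

-- example (Go): count stopped nodes in status output --
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured in this report: only the control-plane node,
	// since the worker was never created.
	status := "multinode-816000\n" +
		"type: Control Plane\n" +
		"host: Stopped\n" +
		"kubelet: Stopped\n" +
		"apiserver: Stopped\n" +
		"kubeconfig: Stopped\n"

	const wantNodes = 2 // a two-node cluster was requested
	hosts := strings.Count(status, "host: Stopped")
	kubelets := strings.Count(status, "kubelet: Stopped")
	if hosts != wantNodes || kubelets != wantNodes {
		fmt.Printf("incorrect number of stopped hosts/kubelets: got %d/%d, want %d\n",
			hosts, kubelets, wantNodes)
	}
}
-- /example --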

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-816000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-816000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183542583s)

-- stdout --
	* [multinode-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-816000" primary control-plane node in "multinode-816000" cluster
	* Restarting existing qemu2 VM for "multinode-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-816000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:05:15.379296    4420 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:05:15.379442    4420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:15.379445    4420 out.go:358] Setting ErrFile to fd 2...
	I0913 12:05:15.379447    4420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:15.379587    4420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:05:15.380597    4420 out.go:352] Setting JSON to false
	I0913 12:05:15.396777    4420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3878,"bootTime":1726250437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:05:15.396846    4420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:05:15.401785    4420 out.go:177] * [multinode-816000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:05:15.408566    4420 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:05:15.408616    4420 notify.go:220] Checking for updates...
	I0913 12:05:15.415696    4420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:05:15.417126    4420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:05:15.420696    4420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:05:15.423720    4420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:05:15.426766    4420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:05:15.429955    4420 config.go:182] Loaded profile config "multinode-816000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:05:15.430252    4420 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:05:15.434622    4420 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:05:15.441678    4420 start.go:297] selected driver: qemu2
	I0913 12:05:15.441683    4420 start.go:901] validating driver "qemu2" against &{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:05:15.441724    4420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:05:15.444228    4420 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:05:15.444254    4420 cni.go:84] Creating CNI manager for ""
	I0913 12:05:15.444273    4420 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 12:05:15.444310    4420 start.go:340] cluster config:
	{Name:multinode-816000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-816000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:05:15.447783    4420 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:15.454610    4420 out.go:177] * Starting "multinode-816000" primary control-plane node in "multinode-816000" cluster
	I0913 12:05:15.458715    4420 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:05:15.458731    4420 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:05:15.458739    4420 cache.go:56] Caching tarball of preloaded images
	I0913 12:05:15.458803    4420 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:05:15.458810    4420 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:05:15.458890    4420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/multinode-816000/config.json ...
	I0913 12:05:15.459348    4420 start.go:360] acquireMachinesLock for multinode-816000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:05:15.459374    4420 start.go:364] duration metric: took 21µs to acquireMachinesLock for "multinode-816000"
	I0913 12:05:15.459383    4420 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:05:15.459388    4420 fix.go:54] fixHost starting: 
	I0913 12:05:15.459502    4420 fix.go:112] recreateIfNeeded on multinode-816000: state=Stopped err=<nil>
	W0913 12:05:15.459510    4420 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:05:15.466639    4420 out.go:177] * Restarting existing qemu2 VM for "multinode-816000" ...
	I0913 12:05:15.470736    4420 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:05:15.470780    4420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:21:a5:f9:c6:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:05:15.472705    4420 main.go:141] libmachine: STDOUT: 
	I0913 12:05:15.472724    4420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:05:15.472751    4420 fix.go:56] duration metric: took 13.361875ms for fixHost
	I0913 12:05:15.472755    4420 start.go:83] releasing machines lock for "multinode-816000", held for 13.377084ms
	W0913 12:05:15.472760    4420 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:05:15.472795    4420 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:15.472799    4420 start.go:729] Will try again in 5 seconds ...
	I0913 12:05:20.474707    4420 start.go:360] acquireMachinesLock for multinode-816000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:05:20.475049    4420 start.go:364] duration metric: took 279.959µs to acquireMachinesLock for "multinode-816000"
	I0913 12:05:20.475166    4420 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:05:20.475183    4420 fix.go:54] fixHost starting: 
	I0913 12:05:20.475907    4420 fix.go:112] recreateIfNeeded on multinode-816000: state=Stopped err=<nil>
	W0913 12:05:20.475933    4420 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:05:20.485300    4420 out.go:177] * Restarting existing qemu2 VM for "multinode-816000" ...
	I0913 12:05:20.489291    4420 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:05:20.489474    4420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:21:a5:f9:c6:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/multinode-816000/disk.qcow2
	I0913 12:05:20.498353    4420 main.go:141] libmachine: STDOUT: 
	I0913 12:05:20.498452    4420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:05:20.498556    4420 fix.go:56] duration metric: took 23.373666ms for fixHost
	I0913 12:05:20.498575    4420 start.go:83] releasing machines lock for "multinode-816000", held for 23.504833ms
	W0913 12:05:20.498780    4420 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-816000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:20.506338    4420 out.go:201] 
	W0913 12:05:20.510364    4420 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:05:20.510394    4420 out.go:270] * 
	* 
	W0913 12:05:20.512966    4420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:05:20.520306    4420 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-816000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (68.65575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
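
The restart path has a fixed shape in this log: fixHost fails within 13-24 ms, minikube prints "Will try again in 5 seconds ...", makes exactly one more attempt, and then exits with status 80 (GUEST_PROVISION). A condensed sketch of that control flow, with the failing "driver start" reduced to the unix dial that socket_vmnet_client performs (an assumed simplification for illustration, not minikube's actual start.go):

-- example (Go): one retry after a fixed delay, then exit 80 --
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// startHost stands in for "driver start": the connection attempt that
// fails throughout this report.
func startHost() error {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		return fmt.Errorf("driver start: %w", err)
	}
	return conn.Close()
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // the exit status every failed start in this report returns
	}
}
-- /example --

With the daemon down on the host, both attempts fail identically, so the 5-second pause only delays the inevitable exit 80 seen in each test above.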

TestMultiNode/serial/ValidateNameConflict (20.22s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-816000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-816000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-816000-m01 --driver=qemu2 : exit status 80 (9.900458958s)

-- stdout --
	* [multinode-816000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-816000-m01" primary control-plane node in "multinode-816000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-816000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-816000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-816000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-816000-m02 --driver=qemu2 : exit status 80 (10.088926s)

-- stdout --
	* [multinode-816000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-816000-m02" primary control-plane node in "multinode-816000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-816000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-816000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-816000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-816000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-816000: exit status 83 (81.42975ms)

-- stdout --
	* The control-plane node multinode-816000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-816000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-816000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-816000 -n multinode-816000: exit status 7 (31.07ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-816000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.22s)
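
Throughout the log, acquireMachinesLock reports specs like {... Delay:500ms Timeout:13m0s} and completes in microseconds because no other process holds the lock. The sketch below shows that poll-with-delay-until-timeout pattern using an O_EXCL lock file; this is an assumed stand-in for illustration (minikube's real implementation wraps a cross-process mutex, not this file scheme):

-- example (Go): acquire a named lock with delay and timeout --
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock creates the lock file exclusively; O_EXCL makes creation atomic,
// so at most one process can hold the lock at a time.
func tryLock(path string) (func(), error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		return nil, err
	}
	f.Close()
	return func() { os.Remove(path) }, nil
}

// acquire polls tryLock every delay until timeout, mirroring the
// Delay/Timeout fields printed in the log.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		release, err := tryLock(path)
		if err == nil {
			return release, nil
		}
		if !os.IsExist(err) {
			return nil, err // unexpected I/O error, not contention
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer release()
	fmt.Println("machines lock held") // uncontended here, hence the microsecond timings
}
-- /example --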

TestPreload (10.08s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-228000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-228000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.932442125s)

-- stdout --
	* [test-preload-228000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-228000" primary control-plane node in "test-preload-228000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-228000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:05:40.963665    4472 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:05:40.963793    4472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:40.963796    4472 out.go:358] Setting ErrFile to fd 2...
	I0913 12:05:40.963798    4472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:05:40.963925    4472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:05:40.964950    4472 out.go:352] Setting JSON to false
	I0913 12:05:40.981390    4472 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3903,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:05:40.981458    4472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:05:40.986559    4472 out.go:177] * [test-preload-228000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:05:40.995435    4472 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:05:40.995481    4472 notify.go:220] Checking for updates...
	I0913 12:05:41.002357    4472 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:05:41.005390    4472 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:05:41.007032    4472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:05:41.010369    4472 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:05:41.013414    4472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:05:41.016718    4472 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:05:41.016772    4472 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:05:41.020285    4472 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:05:41.027473    4472 start.go:297] selected driver: qemu2
	I0913 12:05:41.027481    4472 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:05:41.027488    4472 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:05:41.029797    4472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:05:41.032408    4472 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:05:41.035474    4472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:05:41.035491    4472 cni.go:84] Creating CNI manager for ""
	I0913 12:05:41.035522    4472 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:05:41.035527    4472 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:05:41.035569    4472 start.go:340] cluster config:
	{Name:test-preload-228000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:05:41.039329    4472 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.047357    4472 out.go:177] * Starting "test-preload-228000" primary control-plane node in "test-preload-228000" cluster
	I0913 12:05:41.051457    4472 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0913 12:05:41.051568    4472 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/test-preload-228000/config.json ...
	I0913 12:05:41.051584    4472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/test-preload-228000/config.json: {Name:mk2841690892412f1ae2bd228a913b134ca41906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:05:41.051578    4472 cache.go:107] acquiring lock: {Name:mk61722fb0d0f2e875f45a7556480db3cfc82c42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.051584    4472 cache.go:107] acquiring lock: {Name:mk17f6d43c7206131d95df7c16bbacbac9092ee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.051588    4472 cache.go:107] acquiring lock: {Name:mk40d1c916768e46fb02d4c6a7a45dcbc954cfc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.051617    4472 cache.go:107] acquiring lock: {Name:mk67adf51e3dd46174ab65f152f5fac9b9328ceb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.051628    4472 cache.go:107] acquiring lock: {Name:mk04a5d7ca4773e4d7105659301cd4a9b2d32093 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.051608    4472 cache.go:107] acquiring lock: {Name:mk7e987747dbde652898a5c8c25e7e2431ca24bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.051718    4472 cache.go:107] acquiring lock: {Name:mk7cc455bc79aa345fc74d478291a7aed0ca8257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.052244    4472 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:05:41.052243    4472 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 12:05:41.052265    4472 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:05:41.052289    4472 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 12:05:41.052284    4472 cache.go:107] acquiring lock: {Name:mk1761ab897b40c1310590fa825ee5c8c6581601 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:05:41.052321    4472 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 12:05:41.052369    4472 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 12:05:41.052389    4472 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 12:05:41.052470    4472 start.go:360] acquireMachinesLock for test-preload-228000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:05:41.052490    4472 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:05:41.052509    4472 start.go:364] duration metric: took 33µs to acquireMachinesLock for "test-preload-228000"
	I0913 12:05:41.052520    4472 start.go:93] Provisioning new machine with config: &{Name:test-preload-228000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:05:41.052556    4472 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:05:41.060409    4472 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:05:41.064642    4472 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:05:41.064670    4472 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 12:05:41.064767    4472 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 12:05:41.066911    4472 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 12:05:41.066961    4472 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:05:41.066956    4472 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:05:41.067001    4472 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 12:05:41.067240    4472 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 12:05:41.078659    4472 start.go:159] libmachine.API.Create for "test-preload-228000" (driver="qemu2")
	I0913 12:05:41.078682    4472 client.go:168] LocalClient.Create starting
	I0913 12:05:41.078763    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:05:41.078797    4472 main.go:141] libmachine: Decoding PEM data...
	I0913 12:05:41.078807    4472 main.go:141] libmachine: Parsing certificate...
	I0913 12:05:41.078843    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:05:41.078866    4472 main.go:141] libmachine: Decoding PEM data...
	I0913 12:05:41.078878    4472 main.go:141] libmachine: Parsing certificate...
	I0913 12:05:41.079243    4472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:05:41.236166    4472 main.go:141] libmachine: Creating SSH key...
	I0913 12:05:41.339685    4472 main.go:141] libmachine: Creating Disk image...
	I0913 12:05:41.339729    4472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:05:41.339963    4472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2
	I0913 12:05:41.349866    4472 main.go:141] libmachine: STDOUT: 
	I0913 12:05:41.349896    4472 main.go:141] libmachine: STDERR: 
	I0913 12:05:41.349958    4472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2 +20000M
	I0913 12:05:41.359011    4472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:05:41.359029    4472 main.go:141] libmachine: STDERR: 
	I0913 12:05:41.359047    4472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2
	I0913 12:05:41.359058    4472 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:05:41.359069    4472 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:05:41.359096    4472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:a4:5c:5e:67:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2
	I0913 12:05:41.361065    4472 main.go:141] libmachine: STDOUT: 
	I0913 12:05:41.361081    4472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:05:41.361100    4472 client.go:171] duration metric: took 282.424458ms to LocalClient.Create
	I0913 12:05:41.622261    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 12:05:41.635306    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 12:05:41.652519    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0913 12:05:41.675711    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0913 12:05:41.692129    4472 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 12:05:41.692156    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 12:05:41.715628    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0913 12:05:41.780787    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0913 12:05:41.780821    4472 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 729.218584ms
	I0913 12:05:41.780847    4472 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0913 12:05:41.797333    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0913 12:05:42.192142    4472 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 12:05:42.192260    4472 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 12:05:42.706873    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 12:05:42.706951    4472 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.655431416s
	I0913 12:05:42.706983    4472 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 12:05:43.361358    4472 start.go:128] duration metric: took 2.30886925s to createHost
	I0913 12:05:43.361406    4472 start.go:83] releasing machines lock for "test-preload-228000", held for 2.308978333s
	W0913 12:05:43.361458    4472 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:43.372530    4472 out.go:177] * Deleting "test-preload-228000" in qemu2 ...
	W0913 12:05:43.403828    4472 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:43.403856    4472 start.go:729] Will try again in 5 seconds ...
	I0913 12:05:44.358741    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0913 12:05:44.358799    4472 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.307317417s
	I0913 12:05:44.358823    4472 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0913 12:05:44.825116    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0913 12:05:44.825165    4472 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.773025958s
	I0913 12:05:44.825210    4472 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0913 12:05:45.877122    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0913 12:05:45.877167    4472 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.825774833s
	I0913 12:05:45.877201    4472 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0913 12:05:46.084169    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0913 12:05:46.084236    4472 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.032847042s
	I0913 12:05:46.084271    4472 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0913 12:05:46.552827    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0913 12:05:46.552875    4472 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.501470833s
	I0913 12:05:46.552899    4472 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
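The two W-level "arch mismatch: want arm64 got amd64. fixing" entries above mean the locally resolved image was the amd64 variant, so minikube re-pulled the arm64 one before saving the tarballs under .minikube/cache/images/arm64/. Which platforms a tag actually publishes can be checked against its manifest list (sketch; assumes a docker CLI with manifest support):

    # list the platforms published for the coredns tag from the log
    docker manifest inspect registry.k8s.io/coredns/coredns:v1.8.6 | grep -A2 '"platform"'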
	I0913 12:05:48.404070    4472 start.go:360] acquireMachinesLock for test-preload-228000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:05:48.404540    4472 start.go:364] duration metric: took 395.5µs to acquireMachinesLock for "test-preload-228000"
	I0913 12:05:48.404654    4472 start.go:93] Provisioning new machine with config: &{Name:test-preload-228000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-228000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:05:48.404906    4472 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:05:48.410544    4472 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:05:48.460313    4472 start.go:159] libmachine.API.Create for "test-preload-228000" (driver="qemu2")
	I0913 12:05:48.460362    4472 client.go:168] LocalClient.Create starting
	I0913 12:05:48.460480    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:05:48.460558    4472 main.go:141] libmachine: Decoding PEM data...
	I0913 12:05:48.460581    4472 main.go:141] libmachine: Parsing certificate...
	I0913 12:05:48.460655    4472 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:05:48.460701    4472 main.go:141] libmachine: Decoding PEM data...
	I0913 12:05:48.460728    4472 main.go:141] libmachine: Parsing certificate...
	I0913 12:05:48.461263    4472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:05:48.626196    4472 main.go:141] libmachine: Creating SSH key...
	I0913 12:05:48.794907    4472 main.go:141] libmachine: Creating Disk image...
	I0913 12:05:48.794914    4472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:05:48.795154    4472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2
	I0913 12:05:48.804596    4472 main.go:141] libmachine: STDOUT: 
	I0913 12:05:48.804621    4472 main.go:141] libmachine: STDERR: 
	I0913 12:05:48.804681    4472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2 +20000M
	I0913 12:05:48.812920    4472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:05:48.812938    4472 main.go:141] libmachine: STDERR: 
	I0913 12:05:48.812956    4472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2
	I0913 12:05:48.812960    4472 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:05:48.812976    4472 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:05:48.813015    4472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:59:7c:28:e1:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/test-preload-228000/disk.qcow2
	I0913 12:05:48.814757    4472 main.go:141] libmachine: STDOUT: 
	I0913 12:05:48.814781    4472 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:05:48.814795    4472 client.go:171] duration metric: took 354.44075ms to LocalClient.Create
	I0913 12:05:50.042936    4472 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0913 12:05:50.042993    4472 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.991652916s
	I0913 12:05:50.043024    4472 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0913 12:05:50.043063    4472 cache.go:87] Successfully saved all images to host disk.
	I0913 12:05:50.816941    4472 start.go:128] duration metric: took 2.412065292s to createHost
	I0913 12:05:50.817019    4472 start.go:83] releasing machines lock for "test-preload-228000", held for 2.412550834s
	W0913 12:05:50.817372    4472 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-228000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-228000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:05:50.835042    4472 out.go:201] 
	W0913 12:05:50.838939    4472 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:05:50.838964    4472 out.go:270] * 
	* 
	W0913 12:05:50.841560    4472 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:05:50.852953    4472 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-228000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-13 12:05:50.869713 -0700 PDT m=+2759.330566793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-228000 -n test-preload-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-228000 -n test-preload-228000: exit status 7 (65.796625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-228000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-228000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-228000
--- FAIL: TestPreload (10.08s)
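To reproduce this failure outside the harness, the exact invocation from preload_test.go:46 can be replayed by hand and the profile cleaned up afterwards:

    # same flags the test used (copied from the failure line above)
    out/minikube-darwin-arm64 start -p test-preload-228000 --memory=2200 --alsologtostderr \
      --wait=true --preload=false --driver=qemu2 --kubernetes-version=v1.24.4
    out/minikube-darwin-arm64 delete -p test-preload-228000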

TestScheduledStopUnix (10.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-388000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-388000 --memory=2048 --driver=qemu2 : exit status 80 (9.901728958s)

-- stdout --
	* [scheduled-stop-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-388000" primary control-plane node in "scheduled-stop-388000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-388000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-388000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-388000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-388000" primary control-plane node in "scheduled-stop-388000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-388000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-388000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-13 12:06:00.920408 -0700 PDT m=+2769.381660793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-388000 -n scheduled-stop-388000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-388000 -n scheduled-stop-388000: exit status 7 (69.62725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-388000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-388000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-388000
--- FAIL: TestScheduledStopUnix (10.05s)

TestSkaffold (12.72s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe793518034 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe793518034 version: (1.056201167s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-009000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-009000 --memory=2600 --driver=qemu2 : exit status 80 (9.860083125s)

-- stdout --
	* [skaffold-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-009000" primary control-plane node in "skaffold-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-009000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-009000" primary control-plane node in "skaffold-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-13 12:06:13.646116 -0700 PDT m=+2782.107872918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-009000 -n skaffold-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-009000 -n skaffold-009000: exit status 7 (63.003875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-009000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-009000
--- FAIL: TestSkaffold (12.72s)

TestRunningBinaryUpgrade (596.96s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2787220303 start -p running-upgrade-383000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2787220303 start -p running-upgrade-383000 --memory=2200 --vm-driver=qemu2 : (58.754659375s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0913 12:08:53.309528    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 12:08:57.006084    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.328678542s)
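version_upgrade_test.go drives the upgrade in two steps: a previously released v1.26.0 binary (downloaded to the temp path shown) creates the cluster, then the binary under test restarts the same profile in place. The step with the old binary succeeded in 58.7s; the in-place upgrade is what fails below. A hand-run sketch of the same two-step flow:

    # step 1: released binary creates the profile (temp path as generated by the harness)
    /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2787220303 start -p running-upgrade-383000 --memory=2200 --vm-driver=qemu2
    # step 2: the freshly built binary upgrades it in place
    out/minikube-darwin-arm64 start -p running-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2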

-- stdout --
	* [running-upgrade-383000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-383000" primary control-plane node in "running-upgrade-383000" cluster
	* Updating the running qemu2 "running-upgrade-383000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0913 12:07:56.358675    4860 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:07:56.358819    4860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:07:56.358822    4860 out.go:358] Setting ErrFile to fd 2...
	I0913 12:07:56.358824    4860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:07:56.358950    4860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:07:56.359892    4860 out.go:352] Setting JSON to false
	I0913 12:07:56.376788    4860 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4039,"bootTime":1726250437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:07:56.376873    4860 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:07:56.384021    4860 out.go:177] * [running-upgrade-383000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:07:56.392095    4860 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:07:56.392130    4860 notify.go:220] Checking for updates...
	I0913 12:07:56.398027    4860 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:07:56.402052    4860 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:07:56.405094    4860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:07:56.408096    4860 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:07:56.410834    4860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:07:56.414280    4860 config.go:182] Loaded profile config "running-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:07:56.417992    4860 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 12:07:56.421066    4860 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:07:56.424017    4860 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:07:56.431055    4860 start.go:297] selected driver: qemu2
	I0913 12:07:56.431062    4860 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:07:56.431111    4860 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:07:56.433768    4860 cni.go:84] Creating CNI manager for ""
	I0913 12:07:56.433799    4860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:07:56.433822    4860 start.go:340] cluster config:
	{Name:running-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:07:56.433873    4860 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:07:56.442026    4860 out.go:177] * Starting "running-upgrade-383000" primary control-plane node in "running-upgrade-383000" cluster
	I0913 12:07:56.444981    4860 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 12:07:56.445001    4860 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0913 12:07:56.445011    4860 cache.go:56] Caching tarball of preloaded images
	I0913 12:07:56.445079    4860 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:07:56.445084    4860 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0913 12:07:56.445132    4860 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/config.json ...
	I0913 12:07:56.445621    4860 start.go:360] acquireMachinesLock for running-upgrade-383000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:07:56.445647    4860 start.go:364] duration metric: took 20.625µs to acquireMachinesLock for "running-upgrade-383000"
	I0913 12:07:56.445655    4860 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:07:56.445660    4860 fix.go:54] fixHost starting: 
	I0913 12:07:56.446250    4860 fix.go:112] recreateIfNeeded on running-upgrade-383000: state=Running err=<nil>
	W0913 12:07:56.446258    4860 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:07:56.450100    4860 out.go:177] * Updating the running qemu2 "running-upgrade-383000" VM ...
	I0913 12:07:56.456998    4860 machine.go:93] provisionDockerMachine start ...
	I0913 12:07:56.457034    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:56.457137    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:56.457142    4860 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 12:07:56.508798    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-383000
	
	I0913 12:07:56.508812    4860 buildroot.go:166] provisioning hostname "running-upgrade-383000"
	I0913 12:07:56.508866    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:56.508990    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:56.508995    4860 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-383000 && echo "running-upgrade-383000" | sudo tee /etc/hostname
	I0913 12:07:56.563873    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-383000
	
	I0913 12:07:56.563932    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:56.564058    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:56.564067    4860 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-383000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-383000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-383000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 12:07:56.613258    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: 
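The /etc/hosts edit above is written to be idempotent: the outer grep -xq skips the whole block when a line already names the host, and the inner one chooses between rewriting an existing 127.0.1.1 entry and appending a new one. The guard can be exercised on its own inside the guest (sketch, same pattern as the provisioner):

    # whole-line match for an existing 127.0.1.1 entry
    grep -xq '127.0.1.1\s.*' /etc/hosts && echo "would sed in place" || echo "would append"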
	I0913 12:07:56.613270    4860 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19636-1170/.minikube CaCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19636-1170/.minikube}
	I0913 12:07:56.613278    4860 buildroot.go:174] setting up certificates
	I0913 12:07:56.613284    4860 provision.go:84] configureAuth start
	I0913 12:07:56.613290    4860 provision.go:143] copyHostCerts
	I0913 12:07:56.613356    4860 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem, removing ...
	I0913 12:07:56.613363    4860 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem
	I0913 12:07:56.613541    4860 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem (1078 bytes)
	I0913 12:07:56.613731    4860 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem, removing ...
	I0913 12:07:56.613736    4860 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem
	I0913 12:07:56.613780    4860 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem (1123 bytes)
	I0913 12:07:56.613882    4860 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem, removing ...
	I0913 12:07:56.613886    4860 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem
	I0913 12:07:56.613926    4860 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem (1679 bytes)
	I0913 12:07:56.614011    4860 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-383000 san=[127.0.0.1 localhost minikube running-upgrade-383000]
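configureAuth regenerates the machine server certificate with the SANs listed above (127.0.0.1, localhost, minikube, running-upgrade-383000). Whether the certificate on disk really carries them can be confirmed with openssl (sketch, path from the log):

    # print the SAN extension of the generated server cert
    openssl x509 -in /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'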
	I0913 12:07:56.815263    4860 provision.go:177] copyRemoteCerts
	I0913 12:07:56.815321    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 12:07:56.815330    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:07:56.842906    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 12:07:56.849628    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 12:07:56.856124    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 12:07:56.863187    4860 provision.go:87] duration metric: took 249.902541ms to configureAuth
	I0913 12:07:56.863203    4860 buildroot.go:189] setting minikube options for container-runtime
	I0913 12:07:56.863322    4860 config.go:182] Loaded profile config "running-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:07:56.863359    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:56.863456    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:56.863461    4860 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 12:07:56.919262    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 12:07:56.919272    4860 buildroot.go:70] root file system type: tmpfs
	I0913 12:07:56.919333    4860 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 12:07:56.919388    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:56.919515    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:56.919548    4860 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 12:07:56.974159    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 12:07:56.974217    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:56.974326    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:56.974337    4860 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 12:07:57.025675    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 12:07:57.025686    4860 machine.go:96] duration metric: took 568.70475ms to provisionDockerMachine
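The docker.service update uses a render-diff-swap idiom: the full unit is written to docker.service.new, diffed against the live file, and only on a difference moved into place and followed by daemon-reload/enable/restart; an empty diff means nothing restarts, which is why re-provisioning a healthy machine is cheap. The idiom in isolation (sketch, assuming a freshly rendered unit at /tmp/docker.service.new):

    sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new \
      || { sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service \
           && sudo systemctl daemon-reload && sudo systemctl restart docker; }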
	I0913 12:07:57.025691    4860 start.go:293] postStartSetup for "running-upgrade-383000" (driver="qemu2")
	I0913 12:07:57.025697    4860 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 12:07:57.025759    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 12:07:57.025768    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:07:57.053604    4860 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 12:07:57.054828    4860 info.go:137] Remote host: Buildroot 2021.02.12
	I0913 12:07:57.054835    4860 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/addons for local assets ...
	I0913 12:07:57.055118    4860 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/files for local assets ...
	I0913 12:07:57.055219    4860 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem -> 16952.pem in /etc/ssl/certs
	I0913 12:07:57.055327    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 12:07:57.058390    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem --> /etc/ssl/certs/16952.pem (1708 bytes)
	I0913 12:07:57.064990    4860 start.go:296] duration metric: took 39.296ms for postStartSetup
	I0913 12:07:57.065005    4860 fix.go:56] duration metric: took 619.370375ms for fixHost
	I0913 12:07:57.065041    4860 main.go:141] libmachine: Using SSH client type: native
	I0913 12:07:57.065142    4860 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ae5190] 0x102ae79d0 <nil>  [] 0s} localhost 50268 <nil> <nil>}
	I0913 12:07:57.065147    4860 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 12:07:57.115955    4860 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254476.848015430
	
	I0913 12:07:57.115963    4860 fix.go:216] guest clock: 1726254476.848015430
	I0913 12:07:57.115967    4860 fix.go:229] Guest: 2024-09-13 12:07:56.84801543 -0700 PDT Remote: 2024-09-13 12:07:57.065007 -0700 PDT m=+0.726473584 (delta=-216.99157ms)
	I0913 12:07:57.115977    4860 fix.go:200] guest clock delta is within tolerance: -216.99157ms
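fix.go compares a timestamp read over SSH (date +%s.%N) against the host clock and accepts sub-second skew. A rough host-side equivalent, assuming GNU date on both ends and a hypothetical "guest" SSH alias:

	host=$(date +%s.%N)                 # host timestamp, seconds.nanoseconds
	guest=$(ssh guest 'date +%s.%N')    # same clock read inside the VM
	echo "guest-host delta: $(echo "$guest - $host" | bc) s"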
	I0913 12:07:57.115980    4860 start.go:83] releasing machines lock for "running-upgrade-383000", held for 670.355667ms
	I0913 12:07:57.116047    4860 ssh_runner.go:195] Run: cat /version.json
	I0913 12:07:57.116058    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:07:57.116070    4860 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 12:07:57.116085    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	W0913 12:07:57.116612    4860 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50390->127.0.0.1:50268: read: connection reset by peer
	I0913 12:07:57.116631    4860 retry.go:31] will retry after 359.131121ms: ssh: handshake failed: read tcp 127.0.0.1:50390->127.0.0.1:50268: read: connection reset by peer
	W0913 12:07:57.507228    4860 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0913 12:07:57.507315    4860 ssh_runner.go:195] Run: systemctl --version
	I0913 12:07:57.509312    4860 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 12:07:57.511017    4860 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 12:07:57.511053    4860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0913 12:07:57.514250    4860 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0913 12:07:57.518949    4860 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 12:07:57.518957    4860 start.go:495] detecting cgroup driver to use...
	I0913 12:07:57.519213    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 12:07:57.525321    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0913 12:07:57.528158    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 12:07:57.531253    4860 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 12:07:57.531298    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 12:07:57.534733    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 12:07:57.538233    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 12:07:57.541179    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 12:07:57.543966    4860 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 12:07:57.549127    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 12:07:57.552013    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 12:07:57.554867    4860 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 12:07:57.558069    4860 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 12:07:57.560821    4860 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 12:07:57.563494    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:07:57.638676    4860 ssh_runner.go:195] Run: sudo systemctl restart containerd
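The sed pass above rewrites /etc/containerd/config.toml in place (cgroupfs cgroup driver, pause image, CNI conf dir) before restarting containerd. A quick spot-check of the rendered file, as a sketch:

	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml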
	I0913 12:07:57.649931    4860 start.go:495] detecting cgroup driver to use...
	I0913 12:07:57.649998    4860 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 12:07:57.655650    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 12:07:57.660523    4860 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 12:07:57.669695    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 12:07:57.674082    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 12:07:57.678479    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 12:07:57.683838    4860 ssh_runner.go:195] Run: which cri-dockerd
	I0913 12:07:57.685129    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 12:07:57.687631    4860 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0913 12:07:57.692642    4860 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 12:07:57.785519    4860 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 12:07:57.879078    4860 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 12:07:57.879144    4860 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 12:07:57.884659    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:07:57.977082    4860 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 12:08:00.165640    4860 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.188627208s)
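Docker is switched to the cgroupfs driver via /etc/docker/daemon.json so that it agrees with the kubelet. Verifying both sides report the same driver is a one-liner each (sketch):

	docker info --format '{{.CgroupDriver}}'          # expect: cgroupfs
	grep cgroupDriver /var/lib/kubelet/config.yaml    # expect: cgroupfs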
	I0913 12:08:00.165702    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 12:08:00.170232    4860 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 12:08:00.176263    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 12:08:00.180714    4860 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 12:08:00.272765    4860 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 12:08:00.340412    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:08:00.402806    4860 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 12:08:00.409623    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 12:08:00.414041    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:08:00.485355    4860 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 12:08:00.528608    4860 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 12:08:00.528693    4860 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 12:08:00.530856    4860 start.go:563] Will wait 60s for crictl version
	I0913 12:08:00.530908    4860 ssh_runner.go:195] Run: which crictl
	I0913 12:08:00.532321    4860 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 12:08:00.543976    4860 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0913 12:08:00.544062    4860 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 12:08:00.556061    4860 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 12:08:00.576647    4860 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0913 12:08:00.576811    4860 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0913 12:08:00.578200    4860 kubeadm.go:883] updating cluster {Name:running-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0913 12:08:00.578248    4860 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 12:08:00.578297    4860 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 12:08:00.588168    4860 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 12:08:00.588176    4860 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 12:08:00.588223    4860 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 12:08:00.591200    4860 ssh_runner.go:195] Run: which lz4
	I0913 12:08:00.592567    4860 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 12:08:00.593799    4860 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 12:08:00.593807    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0913 12:08:01.550461    4860 docker.go:649] duration metric: took 957.976958ms to copy over tarball
	I0913 12:08:01.550525    4860 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 12:08:02.858257    4860 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.307770917s)
	I0913 12:08:02.858280    4860 ssh_runner.go:146] rm: /preloaded.tar.lz4
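The preload tarball is unpacked directly into /var, seeding Docker's image store without any registry pulls; the cached images should be visible immediately (sketch):

	docker images --format '{{.Repository}}:{{.Tag}}'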
	I0913 12:08:02.874465    4860 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 12:08:02.878002    4860 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0913 12:08:02.883071    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:08:02.964247    4860 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 12:08:04.158885    4860 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.19466775s)
	I0913 12:08:04.158985    4860 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 12:08:04.172250    4860 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 12:08:04.172261    4860 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 12:08:04.172266    4860 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 12:08:04.177209    4860 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:08:04.179361    4860 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:08:04.182147    4860 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:08:04.182470    4860 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:08:04.183908    4860 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:08:04.183907    4860 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:08:04.184976    4860 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 12:08:04.185043    4860 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:08:04.186362    4860 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:08:04.186387    4860 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:08:04.187623    4860 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 12:08:04.188022    4860 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:08:04.188632    4860 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:08:04.188632    4860 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:08:04.189943    4860 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:08:04.190456    4860 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:08:04.610818    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:08:04.620533    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 12:08:04.625947    4860 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0913 12:08:04.625970    4860 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:08:04.626031    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:08:04.633395    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:08:04.643647    4860 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0913 12:08:04.643666    4860 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:08:04.643671    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0913 12:08:04.643727    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0913 12:08:04.651729    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0913 12:08:04.652125    4860 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0913 12:08:04.652141    4860 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:08:04.652174    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:08:04.655955    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 12:08:04.656092    4860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0913 12:08:04.667721    4860 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0913 12:08:04.667741    4860 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0913 12:08:04.667801    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0913 12:08:04.668403    4860 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 12:08:04.668463    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0913 12:08:04.668496    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:08:04.668504    4860 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0913 12:08:04.668515    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0913 12:08:04.701729    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:08:04.709023    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 12:08:04.709023    4860 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0913 12:08:04.709088    4860 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:08:04.709142    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:08:04.709146    4860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 12:08:04.716696    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:08:04.740762    4860 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0913 12:08:04.740782    4860 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:08:04.740850    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:08:04.753369    4860 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0913 12:08:04.753400    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0913 12:08:04.753417    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 12:08:04.753447    4860 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0913 12:08:04.753466    4860 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:08:04.753522    4860 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:08:04.753528    4860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 12:08:04.786275    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0913 12:08:04.797785    4860 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 12:08:04.797797    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0913 12:08:04.805899    4860 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0913 12:08:04.805916    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0913 12:08:04.805930    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0913 12:08:04.869488    4860 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0913 12:08:04.902122    4860 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 12:08:04.902137    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0913 12:08:05.014311    4860 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 12:08:05.014450    4860 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:08:05.049724    4860 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0913 12:08:05.080305    4860 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0913 12:08:05.080323    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0913 12:08:05.083283    4860 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0913 12:08:05.083307    4860 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:08:05.083382    4860 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:08:05.337178    4860 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0913 12:08:05.412023    4860 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 12:08:05.412163    4860 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 12:08:05.413796    4860 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0913 12:08:05.413817    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0913 12:08:05.492645    4860 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 12:08:05.492660    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0913 12:08:05.946145    4860 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 12:08:05.946185    4860 cache_images.go:92] duration metric: took 1.773982375s to LoadCachedImages
	W0913 12:08:05.946220    4860 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
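Each cached image is streamed into the daemon with "sudo cat <tar> | docker load"; docker load -i is the equivalent short form (sketch, using one of the paths above):

	sudo cat /var/lib/minikube/images/pause_3.7 | docker load    # as in the log
	sudo docker load -i /var/lib/minikube/images/pause_3.7       # equivalent short form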
	I0913 12:08:05.946229    4860 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0913 12:08:05.946278    4860 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-383000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 12:08:05.946356    4860 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 12:08:05.977157    4860 cni.go:84] Creating CNI manager for ""
	I0913 12:08:05.977168    4860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:08:05.977176    4860 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 12:08:05.977184    4860 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-383000 NodeName:running-upgrade-383000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 12:08:05.977239    4860 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-383000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 12:08:05.977299    4860 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 12:08:05.980390    4860 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 12:08:05.980431    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 12:08:05.984006    4860 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0913 12:08:06.001599    4860 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 12:08:06.017238    4860 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0913 12:08:06.036303    4860 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0913 12:08:06.037648    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:08:06.164344    4860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:08:06.171361    4860 certs.go:68] Setting up /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000 for IP: 10.0.2.15
	I0913 12:08:06.171369    4860 certs.go:194] generating shared ca certs ...
	I0913 12:08:06.171378    4860 certs.go:226] acquiring lock for ca certs: {Name:mka395184640c64d3892ae138bcca34b27eb400d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:08:06.171534    4860 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key
	I0913 12:08:06.171568    4860 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key
	I0913 12:08:06.171575    4860 certs.go:256] generating profile certs ...
	I0913 12:08:06.171641    4860 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.key
	I0913 12:08:06.171662    4860 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.key.ef5053c2
	I0913 12:08:06.171673    4860 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt.ef5053c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0913 12:08:06.275512    4860 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt.ef5053c2 ...
	I0913 12:08:06.275526    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt.ef5053c2: {Name:mkcb95a2e2072da01987d9aa8f745424b7175efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:08:06.276172    4860 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.key.ef5053c2 ...
	I0913 12:08:06.276180    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.key.ef5053c2: {Name:mk858bc3691a6ae3ec84710fdd0104d404bad43c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:08:06.276332    4860 certs.go:381] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt.ef5053c2 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt
	I0913 12:08:06.276489    4860 certs.go:385] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.key.ef5053c2 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.key
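The profile apiserver cert is issued for the service IP, loopback, and node IP listed above. The SANs on the result can be inspected with openssl (sketch, same path as the log):

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'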
	I0913 12:08:06.276614    4860 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/proxy-client.key
	I0913 12:08:06.276759    4860 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695.pem (1338 bytes)
	W0913 12:08:06.276781    4860 certs.go:480] ignoring /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695_empty.pem, impossibly tiny 0 bytes
	I0913 12:08:06.276788    4860 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 12:08:06.276807    4860 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem (1078 bytes)
	I0913 12:08:06.276824    4860 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem (1123 bytes)
	I0913 12:08:06.276840    4860 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem (1679 bytes)
	I0913 12:08:06.276877    4860 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem (1708 bytes)
	I0913 12:08:06.277223    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 12:08:06.295048    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 12:08:06.317660    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 12:08:06.329151    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 12:08:06.345332    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 12:08:06.359062    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 12:08:06.376585    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 12:08:06.392044    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 12:08:06.405657    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 12:08:06.415407    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695.pem --> /usr/share/ca-certificates/1695.pem (1338 bytes)
	I0913 12:08:06.427405    4860 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem --> /usr/share/ca-certificates/16952.pem (1708 bytes)
	I0913 12:08:06.440586    4860 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 12:08:06.448146    4860 ssh_runner.go:195] Run: openssl version
	I0913 12:08:06.453493    4860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 12:08:06.456268    4860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:08:06.457911    4860 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:08:06.457999    4860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:08:06.460564    4860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 12:08:06.470769    4860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1695.pem && ln -fs /usr/share/ca-certificates/1695.pem /etc/ssl/certs/1695.pem"
	I0913 12:08:06.473654    4860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1695.pem
	I0913 12:08:06.475129    4860 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:36 /usr/share/ca-certificates/1695.pem
	I0913 12:08:06.475152    4860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1695.pem
	I0913 12:08:06.477079    4860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1695.pem /etc/ssl/certs/51391683.0"
	I0913 12:08:06.479588    4860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16952.pem && ln -fs /usr/share/ca-certificates/16952.pem /etc/ssl/certs/16952.pem"
	I0913 12:08:06.482517    4860 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16952.pem
	I0913 12:08:06.483925    4860 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:36 /usr/share/ca-certificates/16952.pem
	I0913 12:08:06.483947    4860 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16952.pem
	I0913 12:08:06.485840    4860 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16952.pem /etc/ssl/certs/3ec20f2e.0"
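The link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certs, so the generic form of each command is (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"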
	I0913 12:08:06.489124    4860 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 12:08:06.490591    4860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 12:08:06.492500    4860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 12:08:06.494295    4860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 12:08:06.496109    4860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 12:08:06.497979    4860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 12:08:06.499746    4860 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
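-checkend 86400 makes openssl exit non-zero when a cert expires within 24 hours, so each of the checks above doubles as an expiry probe (sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for more than 24h" || echo "expires within 24h"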
	I0913 12:08:06.501551    4860 kubeadm.go:392] StartCluster: {Name:running-upgrade-383000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887776b248ea5c810d31cc7846 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50300 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-383000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:08:06.501623    4860 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 12:08:06.513647    4860 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 12:08:06.516807    4860 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 12:08:06.516819    4860 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 12:08:06.516845    4860 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 12:08:06.519450    4860 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:08:06.519696    4860 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-383000" does not appear in /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:08:06.519743    4860 kubeconfig.go:62] /Users/jenkins/minikube-integration/19636-1170/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-383000" cluster setting kubeconfig missing "running-upgrade-383000" context setting]
	I0913 12:08:06.519878    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:08:06.521149    4860 kapi.go:59] client config for running-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040bd540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 12:08:06.521475    4860 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 12:08:06.524677    4860 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-383000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0913 12:08:06.524684    4860 kubeadm.go:1160] stopping kube-system containers ...
	I0913 12:08:06.524733    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 12:08:06.538840    4860 docker.go:483] Stopping containers: [c2c2a4ed7713 cfbb35f2a5e2 c0a704046504 9f7f4433c63e 2fdcb7b4ac1a 79c82acd1261 a6064d9902c5 b161fe54afc5 14cbc8be6ef7 ea71581b2be4 e3706cf3c8b5 30534831cf6f f2f95fb86dd9 f8dec5cf9b83 15792f2ed106 1dacb95131b3 3abaff2d415b 7bcbb860aee5 1e90c4cf59ba d37a0728bb6d]
	I0913 12:08:06.538935    4860 ssh_runner.go:195] Run: docker stop c2c2a4ed7713 cfbb35f2a5e2 c0a704046504 9f7f4433c63e 2fdcb7b4ac1a 79c82acd1261 a6064d9902c5 b161fe54afc5 14cbc8be6ef7 ea71581b2be4 e3706cf3c8b5 30534831cf6f f2f95fb86dd9 f8dec5cf9b83 15792f2ed106 1dacb95131b3 3abaff2d415b 7bcbb860aee5 1e90c4cf59ba d37a0728bb6d
	I0913 12:08:07.121358    4860 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 12:08:07.208534    4860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:08:07.212651    4860 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 13 19:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 13 19:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 13 19:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 13 19:07 /etc/kubernetes/scheduler.conf
	
	I0913 12:08:07.212710    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf
	I0913 12:08:07.219644    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:08:07.219694    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:08:07.227173    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf
	I0913 12:08:07.231111    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:08:07.231166    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:08:07.234176    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf
	I0913 12:08:07.236899    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:08:07.236932    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:08:07.239537    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf
	I0913 12:08:07.249884    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:08:07.249938    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 12:08:07.253271    4860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:08:07.256215    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:08:07.305335    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:08:07.796566    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:08:07.997133    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:08:08.021025    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
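Restart re-runs only the kubeadm init phases it needs (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. Condensed into a loop with the same paths (sketch; the word-splitting of $p is intentional):

	for p in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	    kubeadm init phase $p --config /var/tmp/minikube/kubeadm.yaml
	done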
	I0913 12:08:08.041443    4860 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:08:08.041530    4860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:08:08.543928    4860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:08:09.043560    4860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:08:09.047789    4860 api_server.go:72] duration metric: took 1.0063875s to wait for apiserver process to appear ...
	I0913 12:08:09.047798    4860 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:08:09.047807    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:14.049701    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:14.049805    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:19.050530    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:19.050624    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:24.051510    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:24.051552    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:29.052380    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:29.052486    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:34.054089    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:34.054188    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:39.056230    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:39.056339    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:44.058954    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:44.059057    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:49.061590    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:49.061695    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:54.064360    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:54.064462    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:08:59.067076    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:08:59.067170    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:04.069056    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:04.069158    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:09.071620    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
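From 12:08:09 onward every healthz probe gives up after roughly five seconds, consistent with a per-request client timeout; minikube keeps re-probing until its overall wait budget is exhausted, then falls back to dumping component logs (below). A sketch of such a poll loop, assuming the 5s timeout inferred from the "Checking"/"stopped" gaps and skipping TLS verification because the apiserver inside the VM serves a self-signed certificate; pollHealthz is an illustrative name, not minikube's API:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz probes url until it answers 200 OK or the deadline passes.
    func pollHealthz(url string, deadline time.Time) error {
    	client := &http.Client{
    		// Matches the ~5s gap between each probe pair in the log.
    		Timeout: 5 * time.Second,
    		// Verification skipped purely to keep the sketch short.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver is healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(4*time.Minute)))
    }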
	I0913 12:09:09.071872    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:09.089787    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:09.089889    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:09.102926    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:09.103021    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:09.114832    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:09.114903    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:09.125371    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:09.125454    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:09.135941    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:09.136028    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:09.146510    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:09.146604    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:09.156793    4860 logs.go:276] 0 containers: []
	W0913 12:09:09.156804    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:09.156871    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:09.166762    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
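The docker ps calls above enumerate each control-plane component by name filter; two IDs per component (e.g. [521bcdd33a54 9f7f4433c63e] for kube-apiserver) mean a current container plus an exited predecessor, which is why most components are dumped twice in the passes below. A sketch of the enumeration, with containerIDs as an assumed helper name (minikube actually drives these commands over SSH into the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists every container, running or exited, whose name
    // contains "k8s_"+component, mirroring the docker ps filters above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		fmt.Println(c, ids, err)
    	}
    }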
	I0913 12:09:09.166778    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:09.166782    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:09.171598    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:09.171607    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:09.182922    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:09.182932    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:09.194013    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:09.194027    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:09.218042    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:09.218051    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:09.229793    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:09.229804    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:09.269711    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:09.269721    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:09.283694    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:09.283703    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:09.295515    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:09.295526    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:09.309191    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:09.309202    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:09.320263    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:09.320273    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:09.337639    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:09.337649    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:09.352248    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:09.352259    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:09.365920    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:09.365928    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:09.377510    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:09.377520    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:09.388776    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:09.388789    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:09.399931    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:09.399940    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
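That completes one full diagnostic pass: the last 400 lines of every container (docker logs --tail 400 <id>), the kubelet and docker/cri-docker journals, a filtered dmesg, container status via crictl, and kubectl describe nodes against the on-VM kubeconfig. The same pass re-runs after each failed healthz window below, visiting the sources in a different order each time, which would be consistent with iteration over a Go map (an inference from the log, not confirmed from minikube's source). A sketch of the source table, with command strings copied from the log:

    package main

    import "fmt"

    // logSources maps each gather target to the command the log shows for
    // it. Per-container logs are formatted separately as
    // "docker logs --tail 400 <id>" for every ID enumerated above.
    var logSources = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    }

    func main() {
    	// Map iteration order is randomized in Go, so successive passes
    	// would naturally list these sources in different orders.
    	for name, cmd := range logSources {
    		fmt.Printf("%-16s %s\n", name, cmd)
    	}
    }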
	I0913 12:09:11.973190    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:16.975923    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:16.976481    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:17.021201    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:17.021356    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:17.041663    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:17.041774    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:17.056049    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:17.056137    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:17.068213    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:17.068301    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:17.079312    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:17.079387    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:17.091212    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:17.091277    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:17.101683    4860 logs.go:276] 0 containers: []
	W0913 12:09:17.101695    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:17.101761    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:17.111997    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:09:17.112013    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:17.112018    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:17.123276    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:17.123286    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:17.135006    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:17.135015    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:17.147328    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:17.147338    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:17.163149    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:17.163164    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:17.174678    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:17.174690    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:17.191526    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:17.191535    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:17.202793    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:17.202805    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:17.207601    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:17.207613    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:17.221496    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:17.221504    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:17.232574    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:17.232586    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:17.270254    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:17.270262    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:17.287833    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:17.287844    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:17.301437    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:17.301446    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:17.312760    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:17.312772    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:17.324429    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:17.324441    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:17.350227    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:17.350235    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:09:19.889762    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:24.891983    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:24.892538    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:24.932911    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:24.933082    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:24.955368    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:24.955470    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:24.970663    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:24.970746    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:24.983714    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:24.983794    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:24.995099    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:24.995183    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:25.005792    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:25.005879    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:25.015693    4860 logs.go:276] 0 containers: []
	W0913 12:09:25.015706    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:25.015768    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:25.033754    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:09:25.033774    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:25.033779    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:25.049509    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:25.049519    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:25.061165    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:25.061176    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:25.075258    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:25.075269    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:25.115310    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:25.115321    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:25.119996    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:25.120002    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:25.133129    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:25.133140    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:25.144791    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:25.144801    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:25.156240    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:25.156253    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:09:25.190436    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:25.190448    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:25.202803    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:25.202813    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:25.214690    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:25.214700    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:25.226602    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:25.226610    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:25.250961    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:25.250970    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:25.264792    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:25.264805    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:25.275674    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:25.275685    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:25.287542    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:25.287554    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:27.807065    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:32.809739    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:32.810237    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:32.847306    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:32.847458    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:32.868994    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:32.869131    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:32.883433    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:32.883510    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:32.896478    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:32.896576    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:32.907332    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:32.907412    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:32.918625    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:32.918711    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:32.929137    4860 logs.go:276] 0 containers: []
	W0913 12:09:32.929149    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:32.929222    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:32.939631    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:09:32.939647    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:32.939652    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:09:32.974260    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:32.974273    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:32.986632    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:32.986643    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:32.998400    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:32.998413    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:33.016007    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:33.016018    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:33.027318    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:33.027326    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:33.051513    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:33.051520    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:33.089262    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:33.089272    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:33.103087    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:33.103095    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:33.115010    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:33.115019    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:33.119686    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:33.119693    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:33.133493    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:33.133508    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:33.155716    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:33.155731    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:33.170547    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:33.170560    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:33.181843    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:33.181855    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:33.192684    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:33.192695    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:33.203960    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:33.203973    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:35.717924    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:40.720632    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:40.721261    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:40.761735    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:40.761902    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:40.783449    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:40.783592    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:40.798578    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:40.798661    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:40.811180    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:40.811269    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:40.822480    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:40.822563    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:40.833138    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:40.833215    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:40.844335    4860 logs.go:276] 0 containers: []
	W0913 12:09:40.844345    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:40.844414    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:40.855101    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:09:40.855117    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:40.855121    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:40.881010    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:40.881017    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:40.920767    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:40.920774    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:40.932516    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:40.932527    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:40.943789    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:40.943800    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:40.954950    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:40.954963    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:40.959429    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:40.959435    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:40.970745    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:40.970757    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:40.981892    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:40.981903    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:40.993386    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:40.993398    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:09:41.034392    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:41.034400    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:41.048157    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:41.048170    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:41.059050    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:41.059060    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:41.079447    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:41.079457    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:41.093239    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:41.093250    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:41.106976    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:41.106986    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:41.118147    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:41.118156    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:43.631497    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:48.634143    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:48.634713    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:48.673878    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:48.674038    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:48.695386    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:48.695509    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:48.710594    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:48.710678    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:48.722992    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:48.723076    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:48.733797    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:48.733868    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:48.744846    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:48.744931    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:48.755019    4860 logs.go:276] 0 containers: []
	W0913 12:09:48.755031    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:48.755094    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:48.768154    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:09:48.768175    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:48.768180    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:48.780514    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:48.780527    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:48.820279    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:48.820286    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:48.834567    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:48.834579    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:48.847880    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:48.847891    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:48.859646    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:48.859655    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:48.875980    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:48.875991    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:48.899759    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:48.899768    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:48.912103    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:48.912113    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:48.916225    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:48.916233    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:09:48.952407    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:48.952418    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:48.963704    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:48.963713    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:48.974969    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:48.974979    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:48.986313    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:48.986327    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:48.999790    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:48.999801    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:49.011991    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:49.011999    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:49.025266    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:49.025276    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:51.539007    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:09:56.541443    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:09:56.541661    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:09:56.564475    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:09:56.564607    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:09:56.581605    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:09:56.581696    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:09:56.600578    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:09:56.600658    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:09:56.611053    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:09:56.611123    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:09:56.621585    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:09:56.621660    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:09:56.631888    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:09:56.631953    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:09:56.641921    4860 logs.go:276] 0 containers: []
	W0913 12:09:56.641932    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:09:56.642006    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:09:56.652472    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:09:56.652490    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:09:56.652496    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:09:56.672243    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:09:56.672252    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:09:56.683236    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:09:56.683246    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:09:56.699160    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:09:56.699169    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:09:56.711073    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:09:56.711082    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:09:56.735444    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:09:56.735452    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:09:56.752785    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:09:56.752799    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:09:56.793050    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:09:56.793059    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:09:56.797308    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:09:56.797317    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:09:56.832529    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:09:56.832538    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:09:56.846477    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:09:56.846488    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:09:56.861269    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:09:56.861278    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:09:56.872479    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:09:56.872490    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:09:56.883785    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:09:56.883796    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:09:56.897809    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:09:56.897820    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:09:56.909956    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:09:56.909967    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:09:56.923186    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:09:56.923194    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:09:59.443912    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:04.444703    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:04.445241    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:04.488907    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:04.489066    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:04.514587    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:04.514689    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:04.532764    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:04.532846    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:04.543808    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:04.543877    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:04.554857    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:04.554941    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:04.565957    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:04.566037    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:04.576846    4860 logs.go:276] 0 containers: []
	W0913 12:10:04.576859    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:04.576931    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:04.587270    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:04.587289    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:04.587294    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:04.598678    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:04.598689    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:04.603630    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:04.603639    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:04.615726    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:04.615739    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:04.628161    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:04.628169    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:04.640371    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:04.640380    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:04.651759    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:04.651767    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:04.663604    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:04.663618    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:04.703418    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:04.703428    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:04.715220    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:04.715229    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:04.729030    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:04.729042    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:10:04.754304    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:04.754314    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:04.789587    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:04.789600    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:04.802784    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:04.802799    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:04.813898    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:04.813910    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:04.831491    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:04.831501    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:04.847463    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:04.847479    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:07.363691    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:12.366385    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:12.366994    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:12.407481    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:12.407640    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:12.428332    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:12.428453    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:12.443485    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:12.443573    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:12.456408    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:12.456492    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:12.467296    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:12.467366    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:12.479388    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:12.479469    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:12.504014    4860 logs.go:276] 0 containers: []
	W0913 12:10:12.504027    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:12.504090    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:12.515015    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:12.515038    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:12.515046    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:12.531205    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:12.531213    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:12.542516    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:12.542529    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:12.554783    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:12.554791    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:10:12.580420    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:12.580428    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:12.615511    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:12.615524    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:12.629747    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:12.629758    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:12.646599    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:12.646608    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:12.657885    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:12.657894    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:12.669189    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:12.669199    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:12.680547    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:12.680559    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:12.697677    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:12.697687    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:12.709283    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:12.709292    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:12.716211    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:12.716220    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:12.730614    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:12.730626    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:12.742293    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:12.742303    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:12.782926    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:12.782937    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:15.297075    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:20.299192    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:20.299820    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:20.338778    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:20.338933    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:20.359831    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:20.359970    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:20.375644    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:20.375725    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:20.388309    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:20.388392    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:20.399213    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:20.399291    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:20.410004    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:20.410071    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:20.420142    4860 logs.go:276] 0 containers: []
	W0913 12:10:20.420156    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:20.420225    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:20.430335    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:20.430355    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:20.430360    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:20.468453    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:20.468461    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:20.481768    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:20.481778    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:20.493287    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:20.493297    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:20.504743    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:20.504752    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:10:20.530285    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:20.530295    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:20.534864    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:20.534869    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:20.548899    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:20.548911    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:20.562652    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:20.562662    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:20.577546    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:20.577561    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:20.589876    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:20.589886    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:20.604055    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:20.604065    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:20.615854    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:20.615866    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:20.650736    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:20.650746    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:20.663262    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:20.663272    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:20.681342    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:20.681353    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:20.697770    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:20.697782    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:23.217504    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:28.219772    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
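The five-second gap between each "Checking apiserver healthz" entry and its "context deadline exceeded" follow-up points to a probe with a hard 5s per-request deadline. A minimal Go sketch of such a probe, assuming a plain GET with a context timeout (the endpoint and timeout come from the log; probeHealthz is an illustrative name, not minikube's implementation, and TLS verification is skipped here only to keep the sketch self-contained):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint with
// a hard 5-second deadline, matching the cadence seen in the log above.
func probeHealthz(url string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	// The apiserver on 10.0.2.15:8443 serves a cluster-CA cert; this sketch
	// skips verification instead of wiring up the CA bundle.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		return err // while the apiserver is down: "context deadline exceeded"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}

Each failed probe triggers the container-enumeration and log-gathering pass that follows.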
	I0913 12:10:28.220339    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:28.259531    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:28.259692    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:28.281922    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:28.282061    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:28.297872    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:28.297968    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:28.310196    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:28.310272    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:28.328939    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:28.329026    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:28.341572    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:28.341654    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:28.351604    4860 logs.go:276] 0 containers: []
	W0913 12:10:28.351615    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:28.351685    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:28.362346    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
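On every retry the per-component containers are re-enumerated with one docker ps call per name filter, and the resulting ID lists are what the "N containers: [...]" lines report. A hedged sketch of that enumeration step, run locally rather than through minikube's SSH runner (listContainers is an illustrative helper, not minikube's code; the component names and docker flags are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers, running or exited,
// whose name matches k8s_<component>, as in the log's
// `docker ps -a --filter=name=k8s_... --format={{.ID}}` invocations.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

An empty result, as for "kindnet" above, produces the warning-level "No container was found" line.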
	I0913 12:10:28.362365    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:28.362370    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:28.374084    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:28.374096    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:28.391962    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:28.391975    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:28.403415    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:28.403426    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:28.415077    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:28.415090    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:28.453338    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:28.453351    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:28.457724    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:28.457733    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:28.469441    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:28.469452    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:28.483059    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:28.483073    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:28.494160    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:28.494170    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:28.508472    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:28.508481    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:28.519844    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:28.519852    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:28.531316    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:28.531331    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:28.542923    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:28.542933    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:28.577946    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:28.577956    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:28.591593    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:28.591603    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:28.603224    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:28.603234    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
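The gather pass itself shells out through /bin/bash -c, tailing the last 400 lines of either a journald unit (kubelet, docker/cri-docker) or an individual container's logs. A minimal sketch of that step, again assuming local execution instead of the SSH runner (gather is an illustrative name; the command strings are taken verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through `/bin/bash -c` and returns
// its combined stdout and stderr, mirroring the Run: lines above.
func gather(command string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	return string(out), err
}

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u docker -u cri-docker -n 400",
		"docker logs --tail 400 740c3a5bf236", // the coredns container ID from the log
	}
	for _, c := range cmds {
		out, err := gather(c)
		if err != nil {
			fmt.Println(c, "failed:", err)
			continue
		}
		fmt.Print(out)
	}
}

The "container status" step uses the same mechanism with a fallback, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", so it degrades to plain docker ps when crictl is absent.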
	I0913 12:10:31.129820    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:36.130374    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:36.130493    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:36.143036    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:36.143126    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:36.155321    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:36.155412    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:36.172143    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:36.172240    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:36.183483    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:36.183570    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:36.195785    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:36.195873    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:36.209520    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:36.209610    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:36.225319    4860 logs.go:276] 0 containers: []
	W0913 12:10:36.225332    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:36.225407    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:36.237143    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:36.237160    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:36.237166    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:36.252705    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:36.252721    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:36.265503    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:36.265515    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:36.278237    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:36.278249    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:36.297215    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:36.297230    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:10:36.323974    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:36.323992    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:36.365933    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:36.365955    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:36.371457    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:36.371469    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:36.388224    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:36.388238    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:36.405015    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:36.405030    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:36.424834    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:36.424846    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:36.438113    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:36.438125    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:36.479443    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:36.479455    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:36.495175    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:36.495189    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:36.510918    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:36.510936    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:36.524324    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:36.524336    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:36.542099    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:36.542114    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:39.057186    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:44.059267    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:44.059469    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:44.070692    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:44.070770    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:44.081507    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:44.081592    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:44.101460    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:44.101534    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:44.113872    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:44.113962    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:44.124482    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:44.124564    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:44.135221    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:44.135304    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:44.145851    4860 logs.go:276] 0 containers: []
	W0913 12:10:44.145862    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:44.145930    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:44.156225    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:44.156241    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:44.156246    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:44.196025    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:44.196039    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:44.200319    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:44.200325    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:44.211778    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:44.211788    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:44.224012    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:44.224022    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:44.235364    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:44.235376    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:44.271151    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:44.271166    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:44.285246    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:44.285257    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:44.296981    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:44.296991    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:44.314150    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:44.314161    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:44.338242    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:44.338255    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:44.351105    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:44.351117    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:44.365471    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:44.365481    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:44.379296    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:44.379307    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:44.391312    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:44.391325    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:44.403423    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:44.403438    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:44.414771    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:44.414782    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:10:46.940657    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:51.942911    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:51.943045    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:51.956548    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:51.956633    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:51.967284    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:51.967373    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:51.977832    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:51.977924    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:51.989009    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:51.989102    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:51.999845    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:51.999930    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:52.010605    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:52.010692    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:52.020973    4860 logs.go:276] 0 containers: []
	W0913 12:10:52.020987    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:52.021062    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:52.032901    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:52.032919    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:10:52.032924    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:10:52.044412    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:52.044423    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:10:52.062486    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:10:52.062498    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:10:52.074311    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:10:52.074322    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:10:52.088947    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:10:52.088957    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:10:52.101196    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:10:52.101209    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:10:52.117624    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:10:52.117636    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:10:52.122388    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:10:52.122395    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:10:52.159520    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:10:52.159533    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:10:52.171465    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:10:52.171479    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:10:52.185099    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:10:52.185113    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:10:52.197208    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:10:52.197218    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:10:52.220376    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:10:52.220383    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:10:52.257905    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:10:52.257915    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:10:52.271652    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:52.271662    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:52.283120    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:10:52.283131    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:10:52.294460    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:52.294474    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:54.810629    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:10:59.813183    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:10:59.813720    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:10:59.852562    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:10:59.852716    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:10:59.879795    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:10:59.879888    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:10:59.892571    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:10:59.892650    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:10:59.908725    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:10:59.908809    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:10:59.919693    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:10:59.919763    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:10:59.930723    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:10:59.930807    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:10:59.946261    4860 logs.go:276] 0 containers: []
	W0913 12:10:59.946273    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:10:59.946338    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:10:59.957662    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:10:59.957683    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:10:59.957690    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:10:59.969792    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:10:59.969803    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:10:59.981081    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:10:59.981094    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:00.002831    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:00.002842    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:00.014465    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:00.014475    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:00.026022    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:00.026033    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:00.037599    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:00.037615    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:00.056194    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:00.056205    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:00.067146    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:00.067156    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:00.091639    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:00.091647    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:00.132172    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:00.132184    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:00.136673    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:00.136682    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:00.170324    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:00.170336    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:00.181727    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:00.181738    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:00.195539    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:00.195555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:00.209317    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:00.209325    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:00.230720    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:00.230733    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:02.746240    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:07.748097    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:07.748654    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:07.783453    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:07.783613    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:07.804395    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:07.804507    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:07.819255    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:07.819344    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:07.831617    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:07.831692    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:07.842554    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:07.842623    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:07.853648    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:07.853731    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:07.864478    4860 logs.go:276] 0 containers: []
	W0913 12:11:07.864490    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:07.864559    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:07.875209    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:07.875224    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:07.875229    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:07.914049    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:07.914061    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:07.929476    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:07.929490    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:07.942861    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:07.942875    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:07.965810    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:07.965819    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:08.004358    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:08.004381    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:08.028889    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:08.028902    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:08.048228    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:08.048242    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:08.052862    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:08.052871    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:08.066036    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:08.066068    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:08.077773    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:08.077785    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:08.089840    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:08.089858    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:08.114094    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:08.114104    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:08.129071    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:08.129081    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:08.140884    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:08.140893    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:08.152416    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:08.152427    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:08.163314    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:08.163324    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:10.678699    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:15.681342    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:15.681523    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:15.693717    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:15.693832    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:15.704416    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:15.704504    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:15.714948    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:15.715032    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:15.726196    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:15.726279    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:15.736839    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:15.736922    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:15.747993    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:15.748071    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:15.758739    4860 logs.go:276] 0 containers: []
	W0913 12:11:15.758750    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:15.758819    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:15.769490    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:15.769507    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:15.769512    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:15.774632    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:15.774640    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:15.786264    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:15.786277    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:15.797663    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:15.797674    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:15.809682    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:15.809693    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:15.833667    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:15.833675    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:15.870753    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:15.870766    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:15.883410    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:15.883425    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:15.897305    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:15.897316    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:15.910969    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:15.910978    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:15.922835    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:15.922848    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:15.935269    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:15.935284    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:15.952846    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:15.952855    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:15.964720    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:15.964735    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:16.005570    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:16.005589    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:16.020300    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:16.020310    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:16.033253    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:16.033262    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:18.546231    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:23.547836    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:23.548183    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:23.577757    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:23.577915    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:23.595950    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:23.596044    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:23.608655    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:23.608745    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:23.620356    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:23.620438    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:23.631316    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:23.631394    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:23.641994    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:23.642081    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:23.652057    4860 logs.go:276] 0 containers: []
	W0913 12:11:23.652066    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:23.652129    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:23.662642    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:23.662660    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:23.662666    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:23.679847    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:23.679858    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:23.691002    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:23.691013    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:23.730809    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:23.730817    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:23.745039    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:23.745049    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:23.756969    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:23.756978    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:23.771954    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:23.771966    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:23.784080    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:23.784091    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:23.797765    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:23.797780    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:23.809205    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:23.809217    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:23.821013    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:23.821023    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:23.832351    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:23.832361    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:23.854908    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:23.854915    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:23.868419    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:23.868430    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:23.872840    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:23.872847    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:23.910106    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:23.910117    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:23.921935    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:23.921948    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:26.436918    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:31.438795    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:31.438941    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:31.451548    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:31.451639    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:31.463637    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:31.463720    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:31.478406    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:31.478489    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:31.490439    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:31.490522    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:31.504395    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:31.504495    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:31.520772    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:31.520864    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:31.532537    4860 logs.go:276] 0 containers: []
	W0913 12:11:31.532549    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:31.532623    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:31.544306    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:31.544324    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:31.544330    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:31.562256    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:31.562268    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:31.574706    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:31.574717    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:31.599707    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:31.599721    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:31.640765    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:31.640782    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:31.655122    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:31.655137    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:31.675553    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:31.675573    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:31.692212    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:31.692229    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:31.705696    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:31.705712    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:31.719177    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:31.719194    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:31.732947    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:31.732959    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:31.738343    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:31.738354    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:31.752429    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:31.752446    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:31.764963    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:31.764976    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:31.778508    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:31.778522    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:31.820538    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:31.820560    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:31.836992    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:31.837006    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:34.351891    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:39.354033    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:39.354591    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:39.392491    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:39.392688    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:39.413070    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:39.413179    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:39.431704    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:39.431798    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:39.444521    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:39.444606    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:39.455108    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:39.455188    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:39.465681    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:39.465768    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:39.476997    4860 logs.go:276] 0 containers: []
	W0913 12:11:39.477007    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:39.477085    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:39.488381    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:39.488398    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:39.488404    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:39.526645    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:39.526657    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:39.539016    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:39.539028    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:39.558328    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:39.558340    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:39.562864    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:39.562874    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:39.575321    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:39.575335    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:39.590539    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:39.590555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:39.603040    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:39.603055    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:39.615822    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:39.615833    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:39.641074    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:39.641087    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:39.657950    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:39.657961    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:39.677543    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:39.677555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:39.689848    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:39.689862    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:39.702627    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:39.702638    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:39.744624    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:39.744640    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:39.761111    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:39.761124    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:39.773452    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:39.773464    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:42.288055    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:47.290123    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:47.290388    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:47.315897    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:47.316009    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:47.330567    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:47.330680    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:47.343102    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:47.343188    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:47.354813    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:47.354908    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:47.365351    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:47.365433    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:47.376732    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:47.376813    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:47.390780    4860 logs.go:276] 0 containers: []
	W0913 12:11:47.390793    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:47.390858    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:47.401377    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:47.401394    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:47.401399    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:47.413701    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:47.413712    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:47.427942    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:47.427955    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:47.439482    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:47.439494    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:47.455120    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:47.455131    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:47.471500    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:47.471512    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:47.483697    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:47.483710    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:47.506511    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:47.506525    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:47.519536    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:47.519547    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:47.542978    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:47.542990    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:47.577113    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:47.577124    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:47.602640    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:47.602654    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:47.614010    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:47.614021    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:47.625828    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:47.625840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:47.663989    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:47.663997    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:47.668101    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:47.668106    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:47.681697    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:47.681706    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
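The round that just finished shows the shape of every log-gathering cycle in this trace: discover container IDs per component with docker ps -a --filter=name=k8s_<component>, then tail the last 400 lines of each. A small Go sketch of that loop, with illustrative names and the container IDs taken from the log above; this is not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors one "Gathering logs for <component>" step:
// for each container ID found for a component, tail its last 400 lines
// through the container runtime, as the trace above does.
func tailContainerLogs(component string, ids []string) {
	for _, id := range ids {
		fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
		out, err := exec.Command("/bin/bash", "-c",
			"docker logs --tail 400 "+id).CombinedOutput()
		if err != nil {
			fmt.Printf("  gather failed for %s: %v\n", id, err)
			continue
		}
		fmt.Print(string(out))
	}
}

func main() {
	// IDs as discovered in the trace via:
	//   docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	tailContainerLogs("kube-apiserver", []string{"521bcdd33a54", "9f7f4433c63e"})
}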
	I0913 12:11:50.195081    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:55.195318    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:55.195422    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:55.206881    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:55.206967    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:55.217761    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:55.217839    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:55.228214    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:55.228300    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:55.239620    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:55.239709    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:55.256164    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:55.256254    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:55.268532    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:55.268618    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:55.281268    4860 logs.go:276] 0 containers: []
	W0913 12:11:55.281281    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:55.281357    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:55.294842    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:55.294863    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:55.294869    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:55.306832    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:55.306848    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:55.347841    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:55.347855    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:55.359973    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:55.359984    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:55.371980    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:55.371994    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:55.390162    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:55.390174    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:55.403754    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:55.403768    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:55.425306    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:55.425317    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:55.440909    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:55.440918    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:55.452455    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:55.452470    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:55.463905    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:55.463920    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:55.487936    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:55.487948    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:55.499283    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:55.499294    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:55.511363    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:55.511375    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:55.525164    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:55.525173    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:55.542577    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:55.542590    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:55.547476    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:55.547488    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:58.087466    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:03.089728    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:03.090369    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:03.129455    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:12:03.129619    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:03.151695    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:12:03.151815    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:03.167020    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:12:03.167119    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:03.179595    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:12:03.179674    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:03.190480    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:12:03.190551    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:03.201140    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:12:03.201210    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:03.226663    4860 logs.go:276] 0 containers: []
	W0913 12:12:03.226679    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:03.226755    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:03.238773    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:12:03.238787    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:12:03.238792    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:12:03.253221    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:12:03.253230    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:12:03.264917    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:12:03.264927    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:12:03.276670    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:03.276681    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:03.299528    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:03.299536    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:03.337503    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:12:03.337514    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:12:03.350178    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:12:03.350190    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:12:03.364025    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:12:03.364034    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:12:03.375254    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:12:03.375264    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:03.388032    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:03.388045    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:03.392687    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:12:03.392695    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:12:03.406473    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:12:03.406484    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:12:03.418413    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:12:03.418425    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:12:03.430145    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:03.430157    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:03.464343    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:12:03.464352    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:12:03.480003    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:12:03.480017    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:12:03.492115    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:12:03.492124    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:12:06.014254    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:11.016717    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:11.016804    4860 kubeadm.go:597] duration metric: took 4m4.509684s to restartPrimaryControlPlane
	W0913 12:12:11.016855    4860 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 12:12:11.016874    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 12:12:12.022804    4860 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005959s)
	I0913 12:12:12.022903    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 12:12:12.027661    4860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:12:12.030439    4860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:12:12.033043    4860 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 12:12:12.033049    4860 kubeadm.go:157] found existing configuration files:
	
	I0913 12:12:12.033070    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf
	I0913 12:12:12.036199    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 12:12:12.036221    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:12:12.039737    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf
	I0913 12:12:12.042183    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 12:12:12.042204    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:12:12.044896    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf
	I0913 12:12:12.047770    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 12:12:12.047798    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:12:12.050346    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf
	I0913 12:12:12.052833    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 12:12:12.052853    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
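The four grep/rm -f pairs above are one loop: each control-plane kubeconfig is kept only if it already references the expected apiserver endpoint, and a failed grep (here, because kubeadm reset removed the files) marks it stale for deletion. A compact Go sketch of that check under those assumptions, using illustrative helpers rather than minikube's kubeadm.go internals:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command the way ssh_runner does in the trace,
// returning a non-nil error on any non-zero exit status.
func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50300"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 2 when the file is missing and 1 when the endpoint
		// is absent; either way the kubeconfig is stale and is removed.
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			_ = run("sudo rm -f " + f)
		}
	}
}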
	I0913 12:12:12.055815    4860 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 12:12:12.074507    4860 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 12:12:12.074641    4860 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 12:12:12.123551    4860 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 12:12:12.123612    4860 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 12:12:12.123669    4860 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 12:12:12.176774    4860 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 12:12:12.179945    4860 out.go:235]   - Generating certificates and keys ...
	I0913 12:12:12.179978    4860 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 12:12:12.180011    4860 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 12:12:12.180049    4860 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 12:12:12.180101    4860 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 12:12:12.180139    4860 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 12:12:12.180184    4860 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 12:12:12.180237    4860 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 12:12:12.180280    4860 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 12:12:12.180326    4860 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 12:12:12.180369    4860 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 12:12:12.180395    4860 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 12:12:12.180421    4860 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 12:12:12.327151    4860 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 12:12:12.394420    4860 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 12:12:12.656154    4860 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 12:12:12.759542    4860 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 12:12:12.786910    4860 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 12:12:12.787285    4860 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 12:12:12.787306    4860 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 12:12:12.877259    4860 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 12:12:12.881282    4860 out.go:235]   - Booting up control plane ...
	I0913 12:12:12.881335    4860 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 12:12:12.881377    4860 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 12:12:12.881408    4860 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 12:12:12.881467    4860 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 12:12:12.881582    4860 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 12:12:17.382308    4860 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503421 seconds
	I0913 12:12:17.382372    4860 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 12:12:17.386602    4860 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 12:12:17.903487    4860 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 12:12:17.903902    4860 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-383000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 12:12:18.407221    4860 kubeadm.go:310] [bootstrap-token] Using token: dqur8d.53xl1lhmd8qyl1lx
	I0913 12:12:18.409844    4860 out.go:235]   - Configuring RBAC rules ...
	I0913 12:12:18.409909    4860 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 12:12:18.409959    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 12:12:18.411758    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 12:12:18.413380    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 12:12:18.414302    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 12:12:18.415268    4860 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 12:12:18.418396    4860 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 12:12:18.577341    4860 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 12:12:18.810682    4860 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 12:12:18.811316    4860 kubeadm.go:310] 
	I0913 12:12:18.811356    4860 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 12:12:18.811365    4860 kubeadm.go:310] 
	I0913 12:12:18.811408    4860 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 12:12:18.811417    4860 kubeadm.go:310] 
	I0913 12:12:18.811432    4860 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 12:12:18.811469    4860 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 12:12:18.811506    4860 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 12:12:18.811511    4860 kubeadm.go:310] 
	I0913 12:12:18.811547    4860 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 12:12:18.811551    4860 kubeadm.go:310] 
	I0913 12:12:18.811591    4860 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 12:12:18.811596    4860 kubeadm.go:310] 
	I0913 12:12:18.811623    4860 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 12:12:18.811667    4860 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 12:12:18.811850    4860 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 12:12:18.811857    4860 kubeadm.go:310] 
	I0913 12:12:18.811922    4860 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 12:12:18.812092    4860 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 12:12:18.812102    4860 kubeadm.go:310] 
	I0913 12:12:18.812173    4860 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dqur8d.53xl1lhmd8qyl1lx \
	I0913 12:12:18.812262    4860 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 \
	I0913 12:12:18.812284    4860 kubeadm.go:310] 	--control-plane 
	I0913 12:12:18.812286    4860 kubeadm.go:310] 
	I0913 12:12:18.812358    4860 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 12:12:18.812361    4860 kubeadm.go:310] 
	I0913 12:12:18.812428    4860 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dqur8d.53xl1lhmd8qyl1lx \
	I0913 12:12:18.812517    4860 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 
	I0913 12:12:18.812606    4860 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 12:12:18.812611    4860 cni.go:84] Creating CNI manager for ""
	I0913 12:12:18.812622    4860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:12:18.814513    4860 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 12:12:18.822318    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 12:12:18.825612    4860 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
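The 496-byte conflist itself is not shown in the log. Purely as an assumption for illustration, a representative bridge CNI config of the kind written to /etc/cni/net.d/1-k8s.conflist looks roughly like the payload below; field values are illustrative, not the exact bytes transferred:

package main

import "os"

// A representative bridge conflist; an assumption for illustration only,
// since the log reports just the byte count (496) of the real payload.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}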
	I0913 12:12:18.830392    4860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 12:12:18.830442    4860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 12:12:18.830461    4860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-383000 minikube.k8s.io/updated_at=2024_09_13T12_12_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=running-upgrade-383000 minikube.k8s.io/primary=true
	I0913 12:12:18.836814    4860 ops.go:34] apiserver oom_adj: -16
	I0913 12:12:18.862001    4860 kubeadm.go:1113] duration metric: took 31.600292ms to wait for elevateKubeSystemPrivileges
	I0913 12:12:18.872127    4860 kubeadm.go:394] duration metric: took 4m12.380592542s to StartCluster
	I0913 12:12:18.872144    4860 settings.go:142] acquiring lock: {Name:mk30414fb8bdc9357b580933d1c04157a3bd6358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:12:18.872237    4860 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:12:18.872621    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:12:18.872823    4860 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:12:18.872913    4860 config.go:182] Loaded profile config "running-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:12:18.872953    4860 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 12:12:18.872986    4860 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-383000"
	I0913 12:12:18.872990    4860 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-383000"
	I0913 12:12:18.872995    4860 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-383000"
	I0913 12:12:18.872996    4860 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-383000"
	W0913 12:12:18.872998    4860 addons.go:243] addon storage-provisioner should already be in state true
	I0913 12:12:18.873009    4860 host.go:66] Checking if "running-upgrade-383000" exists ...
	I0913 12:12:18.874066    4860 kapi.go:59] client config for running-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040bd540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 12:12:18.874195    4860 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-383000"
	W0913 12:12:18.874200    4860 addons.go:243] addon default-storageclass should already be in state true
	I0913 12:12:18.874208    4860 host.go:66] Checking if "running-upgrade-383000" exists ...
	I0913 12:12:18.877144    4860 out.go:177] * Verifying Kubernetes components...
	I0913 12:12:18.877490    4860 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 12:12:18.881397    4860 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 12:12:18.881405    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:12:18.885063    4860 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:12:18.889164    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:12:18.893210    4860 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:12:18.893216    4860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 12:12:18.893222    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:12:18.978440    4860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:12:18.983447    4860 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:12:18.983499    4860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:12:18.988558    4860 api_server.go:72] duration metric: took 115.728834ms to wait for apiserver process to appear ...
	I0913 12:12:18.988566    4860 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:12:18.988573    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:18.992539    4860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 12:12:19.012718    4860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:12:19.330975    4860 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 12:12:19.330989    4860 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 12:12:23.989746    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:23.989794    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:28.990272    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:28.990304    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:33.990374    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:33.990403    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:38.990558    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:38.990619    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:43.990856    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:43.990916    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:48.991410    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:48.991456    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 12:12:49.332241    4860 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 12:12:49.336346    4860 out.go:177] * Enabled addons: storage-provisioner
	I0913 12:12:49.344483    4860 addons.go:510] duration metric: took 30.472845084s for enable addons: enabled=[storage-provisioner]
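The default-storageclass failure above occurs at the list step: the addon callback must list StorageClasses before it can mark one as default, and that GET timed out against 10.0.2.15:8443. A minimal client-go sketch of that list call, assuming a hypothetical kubeconfig path (minikube builds its rest.Config directly, as the kapi.go line earlier in the trace shows); this is a sketch of the general shape, not minikube's code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, used here only to build a client.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// This is the call that failed above with "dial tcp 10.0.2.15:8443:
	// i/o timeout". On success, the addon would go on to annotate one
	// class with storageclass.kubernetes.io/is-default-class=true.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}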
	I0913 12:12:53.992217    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:53.992355    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:58.993459    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:58.993501    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:03.994735    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:03.994776    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:08.996802    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:08.996824    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:13.998852    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:13.998906    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:19.000990    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:19.001102    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:19.011659    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:19.011744    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:19.022367    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:19.022446    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:19.032809    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:19.032887    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:19.043448    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:19.043524    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:19.053786    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:19.053871    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:19.063534    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:19.063610    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:19.079071    4860 logs.go:276] 0 containers: []
	W0913 12:13:19.079088    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:19.079160    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:19.089613    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:19.089629    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:19.089634    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:19.122256    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:19.122262    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:19.161289    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:19.161298    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:19.176571    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:19.176581    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:19.191002    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:19.191012    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:19.204107    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:19.204119    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:19.215719    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:19.215733    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:19.238467    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:19.238477    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:19.250029    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:19.250038    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:19.261979    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:19.261992    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:19.266957    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:19.266965    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:19.281973    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:19.281987    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:19.293178    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:19.293188    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:21.818456    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:26.820896    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:26.821060    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:26.835292    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:26.835383    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:26.846974    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:26.847055    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:26.857926    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:26.858002    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:26.869176    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:26.869248    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:26.879482    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:26.879550    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:26.889771    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:26.889839    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:26.899991    4860 logs.go:276] 0 containers: []
	W0913 12:13:26.900004    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:26.900069    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:26.912886    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:26.912900    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:26.912906    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:26.948801    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:26.948813    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:26.960783    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:26.960797    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:26.977777    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:26.977787    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:27.001149    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:27.001161    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:27.016650    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:27.016662    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:27.028389    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:27.028405    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:27.040463    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:27.040474    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:27.074743    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:27.074757    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:27.079547    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:27.079555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:27.093983    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:27.093994    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:27.107384    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:27.107397    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:27.118759    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:27.118772    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:29.636783    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:34.639193    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:34.639420    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:34.666385    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:34.666494    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:34.688730    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:34.688809    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:34.699432    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:34.699517    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:34.710024    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:34.710108    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:34.720285    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:34.720368    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:34.736616    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:34.736699    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:34.750703    4860 logs.go:276] 0 containers: []
	W0913 12:13:34.750719    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:34.750791    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:34.760742    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:34.760757    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:34.760763    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:34.775330    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:34.775341    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:34.793355    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:34.793365    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:34.816770    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:34.816778    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:34.820980    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:34.820986    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:34.859154    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:34.859166    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:34.871023    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:34.871034    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:34.882821    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:34.882831    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:34.898087    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:34.898098    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:34.910151    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:34.910163    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:34.921812    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:34.921823    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:34.933362    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:34.933372    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:34.966256    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:34.966264    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:37.481735    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:42.483817    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:42.484015    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:42.498307    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:42.498404    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:42.512420    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:42.512496    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:42.522944    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:42.523022    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:42.533438    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:42.533524    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:42.543473    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:42.543554    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:42.553850    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:42.553931    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:42.564226    4860 logs.go:276] 0 containers: []
	W0913 12:13:42.564238    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:42.564311    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:42.574838    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:42.574854    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:42.574860    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:42.592809    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:42.592822    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:42.608850    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:42.608861    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:42.642072    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:42.642084    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:42.646890    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:42.646898    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:42.663331    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:42.663341    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:42.675322    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:42.675333    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:42.687127    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:42.687139    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:42.710374    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:42.710382    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:42.745529    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:42.745544    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:42.760437    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:42.760452    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:42.771854    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:42.771869    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:42.791633    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:42.791642    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:45.303428    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:50.305508    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:50.305678    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:50.318866    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:50.318956    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:50.332428    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:50.332504    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:50.347173    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:50.347272    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:50.357846    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:50.357928    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:50.367922    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:50.368000    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:50.378416    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:50.378492    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:50.388702    4860 logs.go:276] 0 containers: []
	W0913 12:13:50.388716    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:50.388788    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:50.399156    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:50.399169    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:50.399175    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:50.403798    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:50.403805    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:50.415198    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:50.415209    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:50.439549    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:50.439559    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:50.451267    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:50.451278    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:50.485853    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:50.485861    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:50.500676    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:50.500687    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:50.514729    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:50.514740    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:50.525817    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:50.525829    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:50.540326    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:50.540338    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:50.552115    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:50.552127    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:50.569716    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:50.569726    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:50.581587    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:50.581597    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:53.120275    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:58.122540    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:58.122769    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:58.143745    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:58.143857    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:58.160979    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:58.161073    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:58.175551    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:58.175634    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:58.186520    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:58.186607    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:58.197813    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:58.197896    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:58.208628    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:58.208713    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:58.222701    4860 logs.go:276] 0 containers: []
	W0913 12:13:58.222714    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:58.222781    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:58.233189    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:58.233203    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:58.233208    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:58.249445    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:58.249455    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:58.265413    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:58.265424    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:58.277133    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:58.277144    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:58.313809    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:58.313821    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:58.318953    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:58.318960    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:58.355301    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:58.355318    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:58.369626    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:58.369635    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:58.381510    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:58.381519    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:58.406791    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:58.406798    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:58.420910    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:58.420923    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:58.440046    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:58.440059    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:58.451504    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:58.451515    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:00.966139    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:05.968231    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:05.968405    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:05.984297    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:05.984406    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:05.996765    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:05.996847    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:06.007568    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:06.007641    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:06.017804    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:06.017880    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:06.028140    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:06.028225    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:06.038674    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:06.038748    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:06.048429    4860 logs.go:276] 0 containers: []
	W0913 12:14:06.048442    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:06.048508    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:06.058610    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:06.058628    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:06.058633    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:06.070744    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:06.070758    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:06.090579    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:06.090591    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:06.102135    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:06.102150    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:06.106872    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:06.106879    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:06.141227    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:06.141238    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:06.159278    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:06.159287    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:06.171281    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:06.171290    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:06.186984    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:06.186994    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:06.211600    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:06.211610    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:06.246326    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:06.246336    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:06.260221    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:06.260234    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:06.271514    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:06.271525    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:08.785728    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:13.787161    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:13.787304    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:13.798647    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:13.798728    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:13.809687    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:13.809776    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:13.819878    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:13.819960    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:13.830959    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:13.831038    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:13.848035    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:13.848116    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:13.859154    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:13.859234    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:13.869561    4860 logs.go:276] 0 containers: []
	W0913 12:14:13.869582    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:13.869651    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:13.880232    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:13.880245    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:13.880252    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:13.915585    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:13.915593    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:13.950892    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:13.950903    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:13.963004    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:13.963016    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:13.975100    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:13.975112    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:13.987071    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:13.987083    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:14.011058    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:14.011066    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:14.015126    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:14.015132    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:14.028809    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:14.028822    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:14.042813    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:14.042824    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:14.057534    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:14.057545    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:14.086697    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:14.086707    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:14.102464    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:14.102475    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:16.615633    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:21.617989    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:21.618097    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:21.629705    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:21.629795    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:21.643458    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:21.643549    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:21.655118    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:21.655203    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:21.666724    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:21.666802    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:21.677749    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:21.677843    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:21.689037    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:21.689123    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:21.705292    4860 logs.go:276] 0 containers: []
	W0913 12:14:21.705307    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:21.705379    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:21.716151    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:21.716165    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:21.716170    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:21.753049    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:21.753063    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:21.768572    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:21.768586    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:21.783051    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:21.783065    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:21.794721    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:21.794736    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:21.806297    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:21.806306    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:21.823946    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:21.823957    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:21.835943    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:21.835954    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:21.860926    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:21.860937    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:21.873101    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:21.873116    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:21.908210    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:21.908219    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:21.912844    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:21.912851    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:21.923848    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:21.923863    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:24.445271    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:29.447382    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:29.447481    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:29.459146    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:29.459233    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:29.470692    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:29.470778    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:29.487814    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:29.487897    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:29.499164    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:29.499242    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:29.509748    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:29.509836    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:29.520689    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:29.520774    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:29.532169    4860 logs.go:276] 0 containers: []
	W0913 12:14:29.532181    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:29.532253    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:29.543126    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:29.543142    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:29.543149    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:29.558290    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:29.558299    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:29.572723    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:29.572737    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:29.585848    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:29.585861    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:29.602137    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:29.602149    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:29.615134    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:29.615150    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:29.638984    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:29.638993    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:29.650575    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:29.650587    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:29.683285    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:29.683293    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:29.687461    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:29.687468    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:29.721559    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:29.721570    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:29.733240    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:29.733254    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:29.744704    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:29.744718    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:32.262274    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:37.264367    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:37.264486    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:37.281375    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:37.281456    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:37.292693    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:37.292778    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:37.305595    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:37.305683    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:37.317545    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:37.317631    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:37.329346    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:37.329431    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:37.340458    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:37.340542    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:37.352604    4860 logs.go:276] 0 containers: []
	W0913 12:14:37.352616    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:37.352687    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:37.365112    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:37.365131    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:37.365137    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:37.403120    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:14:37.403133    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:14:37.415380    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:37.415392    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:37.428128    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:37.428143    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:37.453561    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:37.453575    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:37.469546    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:37.469557    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:37.483595    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:37.483607    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:37.500092    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:37.500103    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:37.520778    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:14:37.520789    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:14:37.536104    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:37.536117    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:37.551025    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:37.551037    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:37.565023    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:37.565037    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:37.580309    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:37.580323    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:37.614851    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:37.614869    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:37.619388    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:37.619395    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:40.135551    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:45.137856    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:45.138026    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:45.156348    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:45.156440    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:45.171808    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:45.171897    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:45.184081    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:45.184165    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:45.197797    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:45.197879    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:45.209904    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:45.210037    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:45.221482    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:45.221559    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:45.237783    4860 logs.go:276] 0 containers: []
	W0913 12:14:45.237793    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:45.237865    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:45.251862    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:45.251880    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:14:45.251885    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:14:45.264579    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:45.264592    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:45.277669    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:45.277686    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:45.293656    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:45.293674    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:45.314420    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:45.314432    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:45.319160    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:45.319167    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:45.332234    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:45.332247    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:45.370215    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:45.370227    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:45.389152    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:14:45.389164    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:14:45.402636    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:45.402651    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:45.415087    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:45.415115    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:45.441816    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:45.441829    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:45.454270    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:45.454282    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:45.491685    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:45.491697    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:45.507369    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:45.507385    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:48.020874    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:53.023013    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:53.023296    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:53.046459    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:53.046596    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:53.061404    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:53.061498    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:53.075049    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:53.075137    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:53.086521    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:53.086603    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:53.097973    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:53.098053    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:53.110417    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:53.110505    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:53.121697    4860 logs.go:276] 0 containers: []
	W0913 12:14:53.121710    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:53.121785    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:53.134061    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:53.134080    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:53.134086    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:53.171115    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:53.171134    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:53.183956    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:53.183969    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:53.199342    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:53.199351    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:53.213015    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:53.213025    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:53.218209    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:53.218220    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:53.255018    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:53.255030    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:53.273040    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:14:53.273050    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:14:53.286152    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:14:53.286166    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:14:53.301108    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:53.301123    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:53.314261    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:53.314273    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:53.335365    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:53.335376    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:53.348496    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:53.348509    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:53.364416    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:53.364427    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:53.390540    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:53.390551    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:55.905574    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:00.907752    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:00.907937    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:00.921018    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:00.921096    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:00.931824    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:00.931909    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:00.942239    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:00.942308    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:00.952851    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:00.952918    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:00.963464    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:00.963545    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:00.974766    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:00.974805    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:00.986213    4860 logs.go:276] 0 containers: []
	W0913 12:15:00.986223    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:00.986261    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:00.997972    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:00.998022    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:00.998029    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:01.034277    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:01.034289    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:01.047274    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:01.047286    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:01.062366    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:01.062381    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:01.075393    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:01.075406    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:01.091588    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:01.091602    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:01.103855    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:01.103867    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:01.122451    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:01.122461    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:01.134692    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:01.134705    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:01.147454    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:01.147471    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:01.169738    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:01.169750    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:01.195032    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:01.195048    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:01.208397    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:01.208409    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:01.213881    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:01.213889    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:01.251848    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:01.251859    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:03.767277    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:08.769493    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:08.769631    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:08.780635    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:08.780718    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:08.791655    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:08.791750    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:08.802852    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:08.802934    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:08.813532    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:08.813601    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:08.824817    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:08.824897    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:08.835554    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:08.835637    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:08.851184    4860 logs.go:276] 0 containers: []
	W0913 12:15:08.851197    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:08.851270    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:08.861560    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:08.861578    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:08.861584    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:08.877173    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:08.877183    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:08.893910    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:08.893921    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:08.909582    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:08.909595    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:08.922301    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:08.922309    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:08.937641    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:08.937653    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:08.951342    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:08.951355    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:08.977824    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:08.977840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:09.015944    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:09.015956    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:09.029945    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:09.029962    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:09.042711    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:09.042725    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:09.055349    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:09.055361    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:09.074336    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:09.074348    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:09.086772    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:09.086781    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:09.122726    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:09.122737    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:11.629748    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:16.632011    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:16.632458    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:16.667456    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:16.667617    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:16.686754    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:16.686867    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:16.700546    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:16.700642    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:16.712208    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:16.712286    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:16.725337    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:16.725418    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:16.735856    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:16.735942    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:16.748180    4860 logs.go:276] 0 containers: []
	W0913 12:15:16.748192    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:16.748264    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:16.758974    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:16.758991    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:16.758997    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:16.792790    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:16.792807    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:16.808149    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:16.808159    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:16.821669    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:16.821681    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:16.838022    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:16.838039    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:16.851303    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:16.851315    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:16.864783    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:16.864794    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:16.878033    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:16.878048    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:16.891086    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:16.891098    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:16.911885    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:16.911897    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:16.924528    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:16.924539    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:16.950690    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:16.950709    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:16.987651    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:16.987661    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:16.993012    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:16.993019    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:17.011586    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:17.011599    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:19.526159    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:24.528248    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:24.528572    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:24.555068    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:24.555199    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:24.574116    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:24.574214    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:24.587157    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:24.587238    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:24.598447    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:24.598526    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:24.608840    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:24.608919    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:24.624416    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:24.624490    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:24.634727    4860 logs.go:276] 0 containers: []
	W0913 12:15:24.634738    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:24.634807    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:24.645318    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:24.645337    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:24.645343    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:24.682741    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:24.682751    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:24.697614    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:24.697625    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:24.710171    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:24.710181    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:24.722505    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:24.722520    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:24.739408    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:24.739419    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:24.753519    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:24.753532    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:24.766766    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:24.766774    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:24.781255    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:24.781269    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:24.798792    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:24.798803    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:24.803863    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:24.803874    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:24.817105    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:24.817117    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:24.851948    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:24.851966    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:24.867675    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:24.867688    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:24.889119    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:24.889140    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:27.416881    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:32.418987    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:32.419176    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:32.431487    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:32.431564    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:32.441696    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:32.441762    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:32.451961    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:32.452030    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:32.462166    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:32.462245    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:32.472773    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:32.472854    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:32.487118    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:32.487201    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:32.497339    4860 logs.go:276] 0 containers: []
	W0913 12:15:32.497350    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:32.497415    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:32.508834    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:32.508850    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:32.508856    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:32.513195    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:32.513201    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:32.547928    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:32.547940    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:32.562787    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:32.562797    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:32.585159    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:32.585171    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:32.610362    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:32.610381    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:32.624327    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:32.624337    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:32.639927    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:32.639940    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:32.661828    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:32.661840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:32.674356    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:32.674368    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:32.709974    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:32.709990    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:32.723899    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:32.723912    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:32.737220    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:32.737229    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:32.749522    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:32.749536    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:32.762011    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:32.762027    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:35.276478    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:40.278961    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:40.279528    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:40.316261    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:40.316428    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:40.335929    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:40.336036    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:40.350870    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:40.350963    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:40.363293    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:40.363381    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:40.373877    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:40.373951    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:40.384636    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:40.384716    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:40.396325    4860 logs.go:276] 0 containers: []
	W0913 12:15:40.396334    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:40.396400    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:40.406655    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:40.406673    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:40.406679    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:40.443163    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:40.443175    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:40.455882    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:40.455892    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:40.467836    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:40.467847    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:40.479825    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:40.479840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:40.515357    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:40.515367    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:40.519850    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:40.519859    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:40.537884    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:40.537895    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:40.557135    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:40.557146    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:40.571502    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:40.571512    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:40.584093    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:40.584105    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:40.600334    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:40.600348    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:40.626743    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:40.626761    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:40.639797    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:40.639808    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:40.657284    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:40.657297    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:43.171047    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:48.173082    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:48.173203    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:48.190313    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:48.190408    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:48.203015    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:48.203103    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:48.215123    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:48.215215    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:48.227613    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:48.227698    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:48.238492    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:48.238573    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:48.249601    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:48.249676    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:48.260678    4860 logs.go:276] 0 containers: []
	W0913 12:15:48.260691    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:48.260763    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:48.272878    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:48.272898    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:48.272905    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:48.296797    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:48.296814    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:48.330948    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:48.330958    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:48.348919    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:48.348928    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:48.361560    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:48.361575    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:48.374190    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:48.374201    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:48.386221    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:48.386232    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:48.406886    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:48.406896    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:48.419798    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:48.419808    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:48.434579    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:48.434591    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:48.449673    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:48.449687    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:48.461579    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:48.461591    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:48.474711    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:48.474724    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:48.479737    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:48.479746    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:48.516186    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:48.516198    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:51.036264    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:56.038380    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:56.038556    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:56.051639    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:56.051730    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:56.062866    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:56.062941    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:56.073554    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:56.073643    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:56.085432    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:56.085515    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:56.096392    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:56.096479    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:56.106871    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:56.106956    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:56.117558    4860 logs.go:276] 0 containers: []
	W0913 12:15:56.117569    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:56.117635    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:56.128009    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:56.128027    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:56.128032    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:56.142548    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:56.142558    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:56.154398    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:56.154408    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:56.165932    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:56.165945    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:56.182974    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:56.182983    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:56.207830    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:56.207843    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:56.242074    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:56.242086    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:56.260034    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:56.260044    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:56.275362    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:56.275372    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:56.287986    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:56.288001    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:56.306924    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:56.306935    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:56.318997    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:56.319011    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:56.331108    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:56.331119    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:56.366640    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:56.366648    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:56.371261    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:56.371270    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:58.885516    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:03.887586    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:03.887758    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:16:03.900435    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:16:03.900525    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:16:03.911193    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:16:03.911277    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:16:03.921738    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:16:03.921823    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:16:03.932137    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:16:03.932218    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:16:03.942204    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:16:03.942288    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:16:03.953365    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:16:03.953440    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:16:03.965469    4860 logs.go:276] 0 containers: []
	W0913 12:16:03.965480    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:16:03.965553    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:16:03.983973    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:16:03.983990    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:16:03.983996    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:16:03.996290    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:16:03.996303    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:16:04.012287    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:16:04.012296    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:16:04.023874    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:16:04.023883    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:16:04.028355    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:16:04.028362    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:16:04.062296    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:16:04.062308    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:16:04.080468    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:16:04.080480    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:16:04.092448    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:16:04.092459    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:16:04.116106    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:16:04.116116    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:16:04.148896    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:16:04.148904    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:16:04.171214    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:16:04.171225    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:16:04.185369    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:16:04.185384    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:16:04.200833    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:16:04.200846    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:16:04.212792    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:16:04.212804    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:16:04.224738    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:16:04.224749    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:16:06.738351    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:11.740620    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:11.740878    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:16:11.763406    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:16:11.763531    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:16:11.779808    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:16:11.779905    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:16:11.796410    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:16:11.796497    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:16:11.807121    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:16:11.807196    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:16:11.817271    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:16:11.817351    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:16:11.828666    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:16:11.828750    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:16:11.840337    4860 logs.go:276] 0 containers: []
	W0913 12:16:11.840349    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:16:11.840428    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:16:11.850718    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:16:11.850749    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:16:11.850757    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:16:11.883507    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:16:11.883516    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:16:11.898021    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:16:11.898032    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:16:11.910103    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:16:11.910119    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:16:11.922275    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:16:11.922289    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:16:11.947456    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:16:11.947466    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:16:11.960582    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:16:11.960596    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:16:11.972900    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:16:11.972914    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:16:11.977521    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:16:11.977527    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:16:11.993122    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:16:11.993133    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:16:12.007818    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:16:12.007830    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:16:12.019989    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:16:12.020000    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:16:12.035000    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:16:12.035011    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:16:12.052611    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:16:12.052621    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:16:12.065070    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:16:12.065081    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:16:14.601964    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:19.603965    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:19.609651    4860 out.go:201] 
	W0913 12:16:19.613462    4860 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0913 12:16:19.613475    4860 out.go:270] * 
	W0913 12:16:19.614440    4860 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:16:19.625574    4860 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-383000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-13 12:16:19.732278 -0700 PDT m=+3388.218090543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-383000 -n running-upgrade-383000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-383000 -n running-upgrade-383000: exit status 2 (15.529806042s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-383000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-254000          | force-systemd-flag-254000 | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-174000              | force-systemd-env-174000  | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-174000           | force-systemd-env-174000  | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT | 13 Sep 24 12:06 PDT |
	| start   | -p docker-flags-661000                | docker-flags-661000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-254000             | force-systemd-flag-254000 | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-254000          | force-systemd-flag-254000 | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT | 13 Sep 24 12:06 PDT |
	| start   | -p cert-expiration-947000             | cert-expiration-947000    | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-661000 ssh               | docker-flags-661000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-661000 ssh               | docker-flags-661000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-661000                | docker-flags-661000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT | 13 Sep 24 12:06 PDT |
	| start   | -p cert-options-682000                | cert-options-682000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-682000 ssh               | cert-options-682000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-682000 -- sudo        | cert-options-682000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-682000                | cert-options-682000       | jenkins | v1.34.0 | 13 Sep 24 12:06 PDT | 13 Sep 24 12:06 PDT |
	| start   | -p running-upgrade-383000             | minikube                  | jenkins | v1.26.0 | 13 Sep 24 12:06 PDT | 13 Sep 24 12:07 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-383000             | running-upgrade-383000    | jenkins | v1.34.0 | 13 Sep 24 12:07 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-947000             | cert-expiration-947000    | jenkins | v1.34.0 | 13 Sep 24 12:09 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-947000             | cert-expiration-947000    | jenkins | v1.34.0 | 13 Sep 24 12:09 PDT | 13 Sep 24 12:09 PDT |
	| start   | -p kubernetes-upgrade-965000          | kubernetes-upgrade-965000 | jenkins | v1.34.0 | 13 Sep 24 12:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-965000          | kubernetes-upgrade-965000 | jenkins | v1.34.0 | 13 Sep 24 12:10 PDT | 13 Sep 24 12:10 PDT |
	| start   | -p kubernetes-upgrade-965000          | kubernetes-upgrade-965000 | jenkins | v1.34.0 | 13 Sep 24 12:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-965000          | kubernetes-upgrade-965000 | jenkins | v1.34.0 | 13 Sep 24 12:10 PDT | 13 Sep 24 12:10 PDT |
	| start   | -p stopped-upgrade-748000             | minikube                  | jenkins | v1.26.0 | 13 Sep 24 12:10 PDT | 13 Sep 24 12:10 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-748000 stop           | minikube                  | jenkins | v1.26.0 | 13 Sep 24 12:10 PDT | 13 Sep 24 12:11 PDT |
	| start   | -p stopped-upgrade-748000             | stopped-upgrade-748000    | jenkins | v1.34.0 | 13 Sep 24 12:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 12:11:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 12:11:06.936912    5002 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:11:06.937099    5002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:11:06.937106    5002 out.go:358] Setting ErrFile to fd 2...
	I0913 12:11:06.937109    5002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:11:06.937247    5002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:11:06.938432    5002 out.go:352] Setting JSON to false
	I0913 12:11:06.958403    5002 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4229,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:11:06.958479    5002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:11:06.963406    5002 out.go:177] * [stopped-upgrade-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:11:06.971424    5002 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:11:06.971468    5002 notify.go:220] Checking for updates...
	I0913 12:11:06.976897    5002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:11:06.980366    5002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:11:06.983373    5002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:11:06.986380    5002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:11:06.989459    5002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:11:06.992669    5002 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:11:06.996395    5002 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 12:11:06.999385    5002 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:11:07.003315    5002 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:11:07.010263    5002 start.go:297] selected driver: qemu2
	I0913 12:11:07.010269    5002 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:11:07.010317    5002 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:11:07.012872    5002 cni.go:84] Creating CNI manager for ""
	I0913 12:11:07.012910    5002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:11:07.012938    5002 start.go:340] cluster config:
	{Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:11:07.012992    5002 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:11:07.020195    5002 out.go:177] * Starting "stopped-upgrade-748000" primary control-plane node in "stopped-upgrade-748000" cluster
	I0913 12:11:07.024381    5002 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 12:11:07.024397    5002 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0913 12:11:07.024408    5002 cache.go:56] Caching tarball of preloaded images
	I0913 12:11:07.024475    5002 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:11:07.024481    5002 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0913 12:11:07.024531    5002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/config.json ...
	I0913 12:11:07.025022    5002 start.go:360] acquireMachinesLock for stopped-upgrade-748000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:11:07.025056    5002 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "stopped-upgrade-748000"
	I0913 12:11:07.025064    5002 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:11:07.025070    5002 fix.go:54] fixHost starting: 
	I0913 12:11:07.025185    5002 fix.go:112] recreateIfNeeded on stopped-upgrade-748000: state=Stopped err=<nil>
	W0913 12:11:07.025193    5002 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:11:07.029149    5002 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-748000" ...
	I0913 12:11:07.748097    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:07.748654    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:07.783453    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:07.783613    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:07.804395    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:07.804507    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:07.819255    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:07.819344    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:07.831617    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:07.831692    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:07.842554    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:07.842623    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:07.853648    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:07.853731    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:07.864478    4860 logs.go:276] 0 containers: []
	W0913 12:11:07.864490    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:07.864559    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:07.875209    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:07.875224    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:07.875229    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:07.914049    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:07.914061    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:07.929476    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:07.929490    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:07.942861    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:07.942875    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:07.965810    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:07.965819    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:08.004358    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:08.004381    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:08.028889    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:08.028902    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:08.048228    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:08.048242    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:08.052862    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:08.052871    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:08.066036    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:08.066068    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:08.077773    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:08.077785    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:08.089840    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:08.089858    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:08.114094    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:08.114104    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:08.129071    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:08.129081    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:08.140884    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:08.140893    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:08.152416    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:08.152427    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:08.163314    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:08.163324    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
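
The cycle above — probe https://10.0.2.15:8443/healthz, and on failure sweep the logs of every control-plane container before retrying — repeats for the duration of the wait. As an illustrative sketch only (not minikube's actual api_server.go), a single bounded probe of that endpoint might look like this in Go; the hard client timeout is what produces the "Client.Timeout exceeded" errors seen above:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver's /healthz endpoint
    // with a hard client-side timeout.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // bounded, like the probes in the log
            Transport: &http.Transport{
                // A real client would trust the cluster CA; skipping
                // verification keeps this sketch short.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded", as logged above
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(probeHealthz("https://10.0.2.15:8443/healthz"))
    }
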
	I0913 12:11:10.678699    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:07.037358    5002 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:11:07.037440    5002 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50476-:22,hostfwd=tcp::50477-:2376,hostname=stopped-upgrade-748000 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/disk.qcow2
	I0913 12:11:07.083988    5002 main.go:141] libmachine: STDOUT: 
	I0913 12:11:07.084017    5002 main.go:141] libmachine: STDERR: 
	I0913 12:11:07.084024    5002 main.go:141] libmachine: Waiting for VM to start (ssh -p 50476 docker@127.0.0.1)...
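
The invocation above restarts the VM with hvf hardware acceleration, 2200 MB of RAM, 2 vCPUs, and user-mode networking that forwards host port 50476 to the guest's SSH port 22 and 50477 to the Docker TLS port 2376 — which is why the wait that follows dials ssh -p 50476 docker@127.0.0.1. A stripped-down, hypothetical reconstruction of how such a command can be assembled from Go (placeholder image paths, not the driver's real code):

    package main

    import "os/exec"

    func main() {
        // Abbreviated version of the qemu-system-aarch64 invocation logged above.
        cmd := exec.Command("qemu-system-aarch64",
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-accel", "hvf", // macOS Hypervisor.framework acceleration
            "-m", "2200",
            "-smp", "2",
            "-boot", "d",
            "-cdrom", "boot2docker.iso", // placeholder path
            "-nic", "user,model=virtio,hostfwd=tcp::50476-:22,hostfwd=tcp::50477-:2376",
            "-daemonize",
            "disk.qcow2", // placeholder path
        )
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
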
	I0913 12:11:15.681342    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:15.681523    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:15.693717    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:15.693832    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:15.704416    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:15.704504    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:15.714948    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:15.715032    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:15.726196    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:15.726279    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:15.736839    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:15.736922    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:15.747993    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:15.748071    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:15.758739    4860 logs.go:276] 0 containers: []
	W0913 12:11:15.758750    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:15.758819    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:15.769490    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:15.769507    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:15.769512    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:15.774632    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:15.774640    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:15.786264    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:15.786277    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:15.797663    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:15.797674    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:15.809682    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:15.809693    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:15.833667    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:15.833675    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:15.870753    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:15.870766    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:15.883410    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:15.883425    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:15.897305    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:15.897316    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:15.910969    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:15.910978    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:15.922835    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:15.922848    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:15.935269    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:15.935284    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:15.952846    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:15.952855    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:15.964720    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:15.964735    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:16.005570    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:16.005589    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:16.020300    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:16.020310    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:16.033253    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:16.033262    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:18.546231    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:23.547836    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:23.548183    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:23.577757    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:23.577915    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:23.595950    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:23.596044    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:23.608655    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:23.608745    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:23.620356    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:23.620438    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:23.631316    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:23.631394    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:23.641994    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:23.642081    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:23.652057    4860 logs.go:276] 0 containers: []
	W0913 12:11:23.652066    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:23.652129    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:23.662642    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:23.662660    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:23.662666    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:23.679847    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:23.679858    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:23.691002    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:23.691013    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:23.730809    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:23.730817    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:23.745039    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:23.745049    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:23.756969    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:23.756978    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:23.771954    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:23.771966    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:23.784080    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:23.784091    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:23.797765    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:23.797780    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:23.809205    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:23.809217    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:23.821013    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:23.821023    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:23.832351    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:23.832361    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:23.854908    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:23.854915    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:23.868419    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:23.868430    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:23.872840    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:23.872847    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:23.910106    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:23.910117    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:23.921935    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:23.921948    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:26.759509    5002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/config.json ...
	I0913 12:11:26.760093    5002 machine.go:93] provisionDockerMachine start ...
	I0913 12:11:26.760266    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:26.760694    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:26.760706    5002 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 12:11:26.848431    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 12:11:26.848461    5002 buildroot.go:166] provisioning hostname "stopped-upgrade-748000"
	I0913 12:11:26.848617    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:26.848904    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:26.848915    5002 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-748000 && echo "stopped-upgrade-748000" | sudo tee /etc/hostname
	I0913 12:11:26.930448    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-748000
	
	I0913 12:11:26.930547    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:26.930746    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:26.930768    5002 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-748000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-748000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-748000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 12:11:27.005807    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: 
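
The script above ran silently, which is its normal outcome: it is an idempotent /etc/hosts update — leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, and only otherwise append a new entry. A hypothetical Go equivalent of the same logic:

    package main

    import (
        "os"
        "regexp"
    )

    // ensureHostsEntry mirrors the shell above: no-op if the hostname is
    // mapped, rewrite an existing 127.0.1.1 line, else append one.
    // Illustrative helper only, not minikube code.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil // already mapped
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1[ \t].*$`)
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
        } else {
            data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        return os.WriteFile(path, data, 0644)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "stopped-upgrade-748000")
    }
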
	I0913 12:11:27.005822    5002 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19636-1170/.minikube CaCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19636-1170/.minikube}
	I0913 12:11:27.005832    5002 buildroot.go:174] setting up certificates
	I0913 12:11:27.005850    5002 provision.go:84] configureAuth start
	I0913 12:11:27.005855    5002 provision.go:143] copyHostCerts
	I0913 12:11:27.005952    5002 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem, removing ...
	I0913 12:11:27.005968    5002 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem
	I0913 12:11:27.006115    5002 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem (1078 bytes)
	I0913 12:11:27.006325    5002 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem, removing ...
	I0913 12:11:27.006331    5002 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem
	I0913 12:11:27.006403    5002 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem (1123 bytes)
	I0913 12:11:27.006574    5002 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem, removing ...
	I0913 12:11:27.006579    5002 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem
	I0913 12:11:27.006649    5002 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem (1679 bytes)
	I0913 12:11:27.006755    5002 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-748000 san=[127.0.0.1 localhost minikube stopped-upgrade-748000]
	I0913 12:11:27.127564    5002 provision.go:177] copyRemoteCerts
	I0913 12:11:27.127620    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 12:11:27.127629    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:11:27.161526    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 12:11:27.168599    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 12:11:27.175243    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 12:11:27.181850    5002 provision.go:87] duration metric: took 176.002209ms to configureAuth
	I0913 12:11:27.181862    5002 buildroot.go:189] setting minikube options for container-runtime
	I0913 12:11:27.181957    5002 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:11:27.182003    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.182090    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.182098    5002 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 12:11:27.246581    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 12:11:27.246590    5002 buildroot.go:70] root file system type: tmpfs
	I0913 12:11:27.246643    5002 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 12:11:27.246693    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.246797    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.246832    5002 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 12:11:27.314070    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 12:11:27.314144    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.314252    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.314262    5002 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 12:11:27.680963    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
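The "can't stat" message above is expected on a freshly restored VM: the update pattern is to write the candidate unit to docker.service.new, diff it against the installed unit, and only on a difference (or a missing file, as here) move it into place, daemon-reload, enable, and restart. A hypothetical Go rendering of that idempotent swap:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the "diff || { mv && systemctl ... }" pattern:
    // only replace, reload, and restart when the staged unit differs.
    func installIfChanged(current, staged string) error {
        old, _ := os.ReadFile(current) // a missing unit reads as empty, like the failed diff
        next, err := os.ReadFile(staged)
        if err != nil {
            return err
        }
        if bytes.Equal(old, next) {
            return nil // unit unchanged; skip the restart entirely
        }
        if err := os.Rename(staged, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"-f", "enable", "docker"},
            {"-f", "restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
    }
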
	I0913 12:11:27.680984    5002 machine.go:96] duration metric: took 920.909875ms to provisionDockerMachine
	I0913 12:11:27.680991    5002 start.go:293] postStartSetup for "stopped-upgrade-748000" (driver="qemu2")
	I0913 12:11:27.680997    5002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 12:11:27.681059    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 12:11:27.681068    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:11:27.717028    5002 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 12:11:27.718736    5002 info.go:137] Remote host: Buildroot 2021.02.12
	I0913 12:11:27.718744    5002 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/addons for local assets ...
	I0913 12:11:27.718838    5002 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/files for local assets ...
	I0913 12:11:27.718960    5002 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem -> 16952.pem in /etc/ssl/certs
	I0913 12:11:27.719098    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 12:11:27.721962    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem --> /etc/ssl/certs/16952.pem (1708 bytes)
	I0913 12:11:27.729072    5002 start.go:296] duration metric: took 48.07825ms for postStartSetup
	I0913 12:11:27.729086    5002 fix.go:56] duration metric: took 20.704838625s for fixHost
	I0913 12:11:27.729125    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.729229    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.729234    5002 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 12:11:27.794642    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254687.943278879
	
	I0913 12:11:27.794650    5002 fix.go:216] guest clock: 1726254687.943278879
	I0913 12:11:27.794654    5002 fix.go:229] Guest: 2024-09-13 12:11:27.943278879 -0700 PDT Remote: 2024-09-13 12:11:27.729088 -0700 PDT m=+20.823215710 (delta=214.190879ms)
	I0913 12:11:27.794668    5002 fix.go:200] guest clock delta is within tolerance: 214.190879ms
	I0913 12:11:27.794673    5002 start.go:83] releasing machines lock for "stopped-upgrade-748000", held for 20.770434875s
	I0913 12:11:27.794747    5002 ssh_runner.go:195] Run: cat /version.json
	I0913 12:11:27.794756    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:11:27.794821    5002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 12:11:27.794859    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	W0913 12:11:27.829966    5002 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0913 12:11:27.830020    5002 ssh_runner.go:195] Run: systemctl --version
	I0913 12:11:27.871804    5002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 12:11:27.873521    5002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 12:11:27.873563    5002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0913 12:11:27.876982    5002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0913 12:11:27.881856    5002 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 12:11:27.881866    5002 start.go:495] detecting cgroup driver to use...
	I0913 12:11:27.881957    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 12:11:27.888666    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0913 12:11:27.891582    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 12:11:27.894440    5002 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 12:11:27.894473    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 12:11:27.897658    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 12:11:27.901026    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 12:11:27.904279    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 12:11:27.907110    5002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 12:11:27.910056    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 12:11:27.913406    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 12:11:27.916581    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
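
The sed series above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (the cgroupfs driver), migrate deprecated runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A hypothetical Go version of one of those substitutions, keeping the indentation that the sed capture group preserves:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
        in := "[plugins.\"io.containerd.grpc.v1.cri\"]\n  SystemdCgroup = true\n"
        fmt.Print(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
    }
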
	I0913 12:11:27.919435    5002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 12:11:27.922048    5002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 12:11:27.925021    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:28.003060    5002 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 12:11:28.013378    5002 start.go:495] detecting cgroup driver to use...
	I0913 12:11:28.013458    5002 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 12:11:28.018446    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 12:11:28.023362    5002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 12:11:28.029759    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 12:11:28.035025    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 12:11:28.039693    5002 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 12:11:28.093380    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 12:11:28.098700    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 12:11:28.104148    5002 ssh_runner.go:195] Run: which cri-dockerd
	I0913 12:11:28.105566    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 12:11:28.109009    5002 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0913 12:11:28.114158    5002 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 12:11:28.189529    5002 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 12:11:28.269282    5002 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 12:11:28.269359    5002 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 12:11:28.274722    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:28.351962    5002 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 12:11:29.512228    5002 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160294084s)
	I0913 12:11:29.512302    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 12:11:29.518055    5002 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 12:11:29.526495    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 12:11:29.531074    5002 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 12:11:29.607290    5002 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 12:11:29.690681    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:29.776509    5002 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 12:11:29.782236    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 12:11:29.787301    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:29.849834    5002 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 12:11:29.888256    5002 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 12:11:29.888353    5002 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 12:11:29.890459    5002 start.go:563] Will wait 60s for crictl version
	I0913 12:11:29.890530    5002 ssh_runner.go:195] Run: which crictl
	I0913 12:11:29.891910    5002 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 12:11:29.906331    5002 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0913 12:11:29.906408    5002 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 12:11:29.921913    5002 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 12:11:26.436918    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:29.942937    5002 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0913 12:11:29.943015    5002 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0913 12:11:29.944204    5002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 12:11:29.947742    5002 kubeadm.go:883] updating cluster {Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0913 12:11:29.947795    5002 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 12:11:29.947849    5002 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 12:11:29.958453    5002 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 12:11:29.958462    5002 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
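
The mismatch above is a registry rename, not a missing download: this v1.24-era preload ships images tagged k8s.gcr.io/*, while current minikube looks them up as registry.k8s.io/* (the Kubernetes project moved its image registry), so the cached-image load path below kicks in. Purely as an illustration, the gap could also be bridged by retagging:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Illustrative only — minikube reloads from its cache instead. Retag the
    // old-registry images so a registry.k8s.io lookup would find them.
    func main() {
        for _, img := range []string{"kube-apiserver:v1.24.1", "kube-proxy:v1.24.1"} {
            src := "k8s.gcr.io/" + img
            dst := "registry.k8s.io/" + img
            if out, err := exec.Command("docker", "tag", src, dst).CombinedOutput(); err != nil {
                fmt.Println(string(out), err)
            }
        }
    }
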
	I0913 12:11:29.958516    5002 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 12:11:29.961793    5002 ssh_runner.go:195] Run: which lz4
	I0913 12:11:29.963159    5002 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 12:11:29.964510    5002 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 12:11:29.964519    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0913 12:11:30.899680    5002 docker.go:649] duration metric: took 936.598958ms to copy over tarball
	I0913 12:11:30.899748    5002 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
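
Since the guest has no /preloaded.tar.lz4, the 359 MB cached tarball is copied over SSH and unpacked straight into /var, which is where Docker's overlay2 image store lives — far cheaper than pulling each image individually. A sketch of that check-then-extract step (hypothetical paths):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const target = "/preloaded.tar.lz4"
        if _, err := os.Stat(target); os.IsNotExist(err) {
            fmt.Println("tarball missing; the caller copies it over SSH first")
            return
        }
        // --xattrs/--xattrs-include preserve file capabilities on binaries in
        // the image layers; -I lz4 selects the decompressor.
        out, err := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", target).CombinedOutput()
        fmt.Println(string(out), err)
    }
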
	I0913 12:11:31.438795    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:31.438941    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:31.451548    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:31.451639    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:31.463637    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:31.463720    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:31.478406    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:31.478489    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:31.490439    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:31.490522    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:31.504395    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:31.504495    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:31.520772    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:31.520864    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:31.532537    4860 logs.go:276] 0 containers: []
	W0913 12:11:31.532549    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:31.532623    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:31.544306    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:31.544324    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:31.544330    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:31.562256    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:31.562268    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:31.574706    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:31.574717    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:31.599707    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:31.599721    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:31.640765    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:31.640782    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:31.655122    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:31.655137    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:31.675553    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:31.675573    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:31.692212    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:31.692229    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:31.705696    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:31.705712    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:31.719177    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:31.719194    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:31.732947    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:31.732959    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:31.738343    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:31.738354    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:31.752429    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:31.752446    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:31.764963    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:31.764976    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:31.778508    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:31.778522    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:31.820538    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:31.820560    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:31.836992    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:31.837006    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:34.351891    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:32.055314    5002 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155596334s)
	I0913 12:11:32.055327    5002 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 12:11:32.070636    5002 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 12:11:32.074817    5002 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0913 12:11:32.080250    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:32.158381    5002 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 12:11:34.860184    5002 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.701891667s)
	I0913 12:11:34.860288    5002 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 12:11:34.870808    5002 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 12:11:34.870829    5002 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 12:11:34.870834    5002 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
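
LoadCachedImages then probes the runtime for each required image; the inspect calls below are what decide "needs transfer". A minimal sketch of that probe, assuming a reachable docker daemon and using an illustrative subset of the image list:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID asks the daemon for the image's ID, as the log does with
    // `docker image inspect --format {{.Id}}`; "" means the image is absent.
    func imageID(ref string) string {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.24.1",
    		"registry.k8s.io/kube-proxy:v1.24.1",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, ref := range required {
    		if id := imageID(ref); id == "" {
    			fmt.Printf("%q needs transfer: not in container runtime\n", ref)
    		} else {
    			fmt.Printf("%q present as %s\n", ref, id)
    		}
    	}
    }
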
	I0913 12:11:34.875629    5002 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:34.877519    5002 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:34.879527    5002 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:34.880364    5002 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:34.881457    5002 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:34.881478    5002 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:34.883439    5002 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:34.883537    5002 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:34.884867    5002 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:34.884948    5002 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:34.886359    5002 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:34.886406    5002 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 12:11:34.887397    5002 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:34.887444    5002 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:34.888350    5002 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 12:11:34.889190    5002 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.316987    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:35.322882    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:35.325390    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:35.332919    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:35.334656    5002 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0913 12:11:35.334679    5002 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:35.334734    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:35.341960    5002 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0913 12:11:35.341981    5002 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:35.342049    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:35.356680    5002 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0913 12:11:35.356705    5002 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:35.356709    5002 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0913 12:11:35.356719    5002 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:35.356771    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:35.356771    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:35.362664    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0913 12:11:35.368282    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0913 12:11:35.370402    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:35.379970    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0913 12:11:35.380000    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0913 12:11:35.384160    5002 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0913 12:11:35.384179    5002 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:35.384232    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:35.394333    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 12:11:35.394454    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0913 12:11:35.396036    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0913 12:11:35.396049    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
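
Each transfer is preceded by the existence check seen above: stat the target path and treat exit status 1 ("No such file or directory") as "absent, copy it". A small sketch of the same decision, run locally here (minikube runs stat through its SSH runner, and `stat -c` is the GNU/Linux form):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func needsCopy(path string) bool {
    	// Mirrors: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
    	err := exec.Command("stat", "-c", "%s %y", path).Run()
    	return err != nil // non-zero exit: file absent, so transfer it
    }

    func main() {
    	p := "/var/lib/minikube/images/etcd_3.5.3-0"
    	if needsCopy(p) {
    		fmt.Println("would scp cached image to", p)
    	} else {
    		fmt.Println(p, "already present, skipping transfer")
    	}
    }
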
	I0913 12:11:35.402701    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0913 12:11:35.415514    5002 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 12:11:35.415667    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.438297    5002 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0913 12:11:35.438321    5002 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.438330    5002 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0913 12:11:35.438349    5002 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0913 12:11:35.438385    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.438391    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0913 12:11:35.480340    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 12:11:35.480341    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 12:11:35.480480    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 12:11:35.480480    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 12:11:35.494454    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0913 12:11:35.494486    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0913 12:11:35.513123    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0913 12:11:35.513155    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0913 12:11:35.558294    5002 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 12:11:35.558308    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0913 12:11:35.643343    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
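
Loading goes through `sudo cat <tarball> | docker load`. The equivalent without the shell pipe is to hand the tarball to docker load's stdin, sketched below with an illustrative local path:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func dockerLoad(tarball string) error {
    	f, err := os.Open(tarball)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // same effect as: cat tarball | docker load
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
    		fmt.Fprintln(os.Stderr, "load failed:", err)
    	}
    }
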
	I0913 12:11:35.643378    5002 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 12:11:35.643386    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0913 12:11:35.703987    5002 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 12:11:35.704120    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:35.745961    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0913 12:11:35.745982    5002 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0913 12:11:35.745987    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0913 12:11:35.746017    5002 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0913 12:11:35.746035    5002 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:35.746099    5002 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:35.927084    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0913 12:11:35.927119    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 12:11:35.927262    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 12:11:35.928597    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0913 12:11:35.928609    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0913 12:11:35.956719    5002 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 12:11:35.956733    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0913 12:11:36.192027    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 12:11:36.192068    5002 cache_images.go:92] duration metric: took 1.321279833s to LoadCachedImages
	W0913 12:11:36.192100    5002 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0913 12:11:36.192106    5002 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0913 12:11:36.192160    5002 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-748000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 12:11:36.192252    5002 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 12:11:36.209948    5002 cni.go:84] Creating CNI manager for ""
	I0913 12:11:36.209967    5002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:11:36.209973    5002 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 12:11:36.209982    5002 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-748000 NodeName:stopped-upgrade-748000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 12:11:36.210050    5002 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-748000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 12:11:36.210487    5002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 12:11:36.213382    5002 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 12:11:36.213418    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 12:11:36.217251    5002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0913 12:11:36.222067    5002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 12:11:36.227168    5002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0913 12:11:36.232568    5002 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0913 12:11:36.234070    5002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
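
The /etc/hosts update above is a filter-and-append rewrite: strip any existing control-plane.minikube.internal line, append the current mapping, and swap the file into place. A Go sketch of the same rewrite (direct file writes here; the log stages through /tmp/h.$$ and sudo cp):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var keep []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale mapping, drop it (the log's grep -v)
    		}
    		keep = append(keep, line)
    	}
    	keep = append(keep, ip+"\t"+name)
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath) // swap in one step, like the cp in the log
    }

    func main() {
    	if err := pinHost("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
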
	I0913 12:11:36.237717    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:36.322586    5002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:11:36.332231    5002 certs.go:68] Setting up /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000 for IP: 10.0.2.15
	I0913 12:11:36.332245    5002 certs.go:194] generating shared ca certs ...
	I0913 12:11:36.332254    5002 certs.go:226] acquiring lock for ca certs: {Name:mka395184640c64d3892ae138bcca34b27eb400d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.332433    5002 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key
	I0913 12:11:36.332485    5002 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key
	I0913 12:11:36.332493    5002 certs.go:256] generating profile certs ...
	I0913 12:11:36.332569    5002 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.key
	I0913 12:11:36.332590    5002 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47
	I0913 12:11:36.332600    5002 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0913 12:11:36.375188    5002 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47 ...
	I0913 12:11:36.375203    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47: {Name:mke754fdfe22cc0e0729d44e40da898b602d46bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.375719    5002 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47 ...
	I0913 12:11:36.375727    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47: {Name:mk0d9dc37fb392f3d1ec39b7fcf3349303ce4783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.375882    5002 certs.go:381] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt
	I0913 12:11:36.376046    5002 certs.go:385] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key
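
The apiserver cert generated here embeds four IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A self-contained sketch of issuing such a cert with Go's standard library; the throwaway CA below is a stand-in for the existing minikubeCA key pair, and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Stand-in CA (minikube reuses the minikubeCA key pair on disk instead).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert with the IP SANs from the log.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
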
	I0913 12:11:36.376200    5002 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/proxy-client.key
	I0913 12:11:36.376333    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695.pem (1338 bytes)
	W0913 12:11:36.376366    5002 certs.go:480] ignoring /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695_empty.pem, impossibly tiny 0 bytes
	I0913 12:11:36.376372    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 12:11:36.376391    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem (1078 bytes)
	I0913 12:11:36.376415    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem (1123 bytes)
	I0913 12:11:36.376436    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem (1679 bytes)
	I0913 12:11:36.376478    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem (1708 bytes)
	I0913 12:11:36.376806    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 12:11:36.383777    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 12:11:36.390959    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 12:11:36.397610    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 12:11:36.405084    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 12:11:36.412424    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 12:11:36.419494    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 12:11:36.426116    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 12:11:36.433084    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695.pem --> /usr/share/ca-certificates/1695.pem (1338 bytes)
	I0913 12:11:36.440306    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem --> /usr/share/ca-certificates/16952.pem (1708 bytes)
	I0913 12:11:36.446823    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 12:11:36.453299    5002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 12:11:36.458494    5002 ssh_runner.go:195] Run: openssl version
	I0913 12:11:36.460386    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16952.pem && ln -fs /usr/share/ca-certificates/16952.pem /etc/ssl/certs/16952.pem"
	I0913 12:11:36.463232    5002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16952.pem
	I0913 12:11:36.464547    5002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:36 /usr/share/ca-certificates/16952.pem
	I0913 12:11:36.464574    5002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16952.pem
	I0913 12:11:36.466348    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16952.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 12:11:36.469474    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 12:11:36.472647    5002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:11:36.474179    5002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:11:36.474252    5002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:11:36.476292    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 12:11:36.479487    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1695.pem && ln -fs /usr/share/ca-certificates/1695.pem /etc/ssl/certs/1695.pem"
	I0913 12:11:36.482286    5002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1695.pem
	I0913 12:11:36.483816    5002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:36 /usr/share/ca-certificates/1695.pem
	I0913 12:11:36.483839    5002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1695.pem
	I0913 12:11:36.485610    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1695.pem /etc/ssl/certs/51391683.0"
	I0913 12:11:36.488955    5002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 12:11:36.490395    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 12:11:36.492300    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 12:11:36.494159    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 12:11:36.496380    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 12:11:36.498373    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 12:11:36.500220    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
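
Each `-checkend 86400` call above asks whether the cert will still be valid 24 hours from now. The same check in Go, parsing the PEM and comparing NotAfter (the path is one of the certs the log checks):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the inverse of `openssl x509 -checkend` succeeding.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; it would be regenerated")
    	}
    }
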
	I0913 12:11:36.502234    5002 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:11:36.502306    5002 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 12:11:36.512965    5002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 12:11:36.516076    5002 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 12:11:36.516086    5002 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 12:11:36.516110    5002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 12:11:36.518878    5002 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:11:36.519175    5002 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-748000" does not appear in /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:11:36.519268    5002 kubeconfig.go:62] /Users/jenkins/minikube-integration/19636-1170/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-748000" cluster setting kubeconfig missing "stopped-upgrade-748000" context setting]
	I0913 12:11:36.519480    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.519887    5002 kapi.go:59] client config for stopped-upgrade-748000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063b1540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 12:11:36.520218    5002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 12:11:36.522799    5002 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-748000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
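
Drift detection rests on diff's exit codes: 0 means identical, 1 means the files differ (the case above), anything else is a real error. A sketch of that three-way interpretation, with paths mirroring the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted reports whether the rendered kubeadm.yaml.new differs from
    // the one on disk, returning the unified diff when it does.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // exit 0: identical
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // exit 1: files differ, drift detected
    	}
    	return false, "", err // diff itself failed (missing file, etc.)
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff error:", err)
    		return
    	}
    	if drifted {
    		fmt.Println("kubeadm config drift detected:\n" + diff)
    	}
    }
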
	I0913 12:11:36.522808    5002 kubeadm.go:1160] stopping kube-system containers ...
	I0913 12:11:36.522855    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 12:11:36.533034    5002 docker.go:483] Stopping containers: [5f8e3aa0e56c 72ed56d5e8b8 1a47681bea37 813eda68f74d a25d0b8881b1 a97dac85d1aa ece5ce1f1212 95ac3fc8a10e]
	I0913 12:11:36.533106    5002 ssh_runner.go:195] Run: docker stop 5f8e3aa0e56c 72ed56d5e8b8 1a47681bea37 813eda68f74d a25d0b8881b1 a97dac85d1aa ece5ce1f1212 95ac3fc8a10e
	I0913 12:11:36.544172    5002 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 12:11:36.549687    5002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:11:36.552945    5002 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 12:11:36.552953    5002 kubeadm.go:157] found existing configuration files:
	
	I0913 12:11:36.552981    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I0913 12:11:36.555791    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 12:11:36.555816    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:11:36.558195    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I0913 12:11:36.561089    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 12:11:36.561112    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:11:36.564038    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I0913 12:11:36.566427    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 12:11:36.566460    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:11:36.569331    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I0913 12:11:36.572155    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 12:11:36.572176    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 12:11:36.574961    5002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:11:36.577736    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:36.600379    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:39.354033    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:39.354591    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:39.392491    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:39.392688    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:39.413070    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:39.413179    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:39.431704    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:39.431798    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:39.444521    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:39.444606    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:39.455108    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:39.455188    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:39.465681    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:39.465768    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:39.476997    4860 logs.go:276] 0 containers: []
	W0913 12:11:39.477007    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:39.477085    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:39.488381    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:39.488398    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:39.488404    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:39.526645    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:39.526657    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:39.539016    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:39.539028    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:39.558328    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:39.558340    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:39.562864    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:39.562874    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:39.575321    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:39.575335    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:39.590539    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:39.590555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:39.603040    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:39.603055    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:39.615822    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:39.615833    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:39.641074    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:39.641087    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:39.657950    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:39.657961    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:39.677543    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:39.677555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:39.689848    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:39.689862    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:39.702627    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:39.702638    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:39.744624    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:39.744640    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:39.761111    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:39.761124    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:39.773452    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:39.773464    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:37.048792    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:37.175493    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:37.198285    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
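
The restart path replays individual kubeadm init phases rather than running a full init, with the versioned binaries forced onto PATH via `sudo env`. A sketch of that phase sequence as exec calls, using the config path from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func runPhases(config string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	path := "PATH=/var/lib/minikube/binaries/v1.24.1:" + os.Getenv("PATH")
    	for _, p := range phases {
    		// Mirrors: sudo env PATH="..." kubeadm init phase <phase> --config <yaml>
    		args := append([]string{"env", path, "kubeadm", "init", "phase"}, p...)
    		args = append(args, "--config", config)
    		cmd := exec.Command("sudo", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return fmt.Errorf("phase %v failed: %w", p, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
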
	I0913 12:11:37.224868    5002 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:11:37.224954    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:11:37.727081    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:11:38.226994    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:11:38.232769    5002 api_server.go:72] duration metric: took 1.0079415s to wait for apiserver process to appear ...
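
The process wait above polls pgrep until a kube-apiserver command line matching the pattern shows up, then reports the elapsed time as a duration metric. A rough equivalent (the poll interval and budget are assumptions, not minikube's values):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForProcess(pattern string, budget time.Duration) (time.Duration, error) {
    	start := time.Now()
    	for time.Since(start) < budget {
    		// -f matches the full command line, -x exactly, -n the newest process.
    		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
    			return time.Since(start), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return time.Since(start), fmt.Errorf("no process matched %q", pattern)
    }

    func main() {
    	d, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("duration metric: took %s to wait for apiserver process\n", d)
    }
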
	I0913 12:11:38.232778    5002 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:11:38.232788    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:42.288055    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:43.234668    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:43.234697    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
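
Both clients here poll /healthz with a per-request deadline and treat a timeout as "stopped", retrying until an overall budget expires. A minimal sketch of that loop; the skip-verify transport stands in for the cluster cert the host does not trust, and the timeouts are illustrative:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, budget time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second, // per-attempt deadline, like the log's context deadline
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver is healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // back off and re-poll
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
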
	I0913 12:11:47.290123    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:47.290388    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:47.315897    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:47.316009    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:47.330567    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:47.330680    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:47.343102    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:47.343188    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:47.354813    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:47.354908    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:47.365351    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:47.365433    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:47.376732    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:47.376813    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:47.390780    4860 logs.go:276] 0 containers: []
	W0913 12:11:47.390793    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:47.390858    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:47.401377    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:47.401394    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:47.401399    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:47.413701    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:47.413712    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:47.427942    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:47.427955    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:47.439482    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:47.439494    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:47.455120    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:47.455131    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:47.471500    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:47.471512    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:47.483697    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:47.483710    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:47.506511    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:47.506525    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:47.519536    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:47.519547    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:47.542978    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:47.542990    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:11:47.577113    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:47.577124    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:47.602640    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:47.602654    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:47.614010    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:47.614021    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:47.625828    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:47.625840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:47.663989    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:47.663997    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:47.668101    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:47.668106    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:47.681697    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:47.681706    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:50.195081    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:48.234731    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:48.234786    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:55.195318    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:55.195422    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:11:55.206881    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:11:55.206967    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:11:55.217761    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:11:55.217839    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:11:55.228214    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:11:55.228300    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:11:55.239620    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:11:55.239709    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:11:55.256164    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:11:55.256254    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:11:55.268532    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:11:55.268618    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:11:55.281268    4860 logs.go:276] 0 containers: []
	W0913 12:11:55.281281    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:11:55.281357    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:11:55.294842    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:11:55.294863    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:11:55.294869    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:11:55.306832    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:11:55.306848    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:11:55.347841    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:11:55.347855    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:11:55.359973    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:11:55.359984    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:11:55.371980    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:11:55.371994    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:11:55.390162    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:11:55.390174    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:11:55.403754    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:11:55.403768    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:11:55.425306    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:11:55.425317    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:11:55.440909    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:11:55.440918    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:11:55.452455    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:11:55.452470    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:11:55.463905    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:11:55.463920    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:11:55.487936    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:11:55.487948    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:11:55.499283    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:11:55.499294    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:11:55.511363    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:11:55.511375    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:11:55.525164    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:11:55.525173    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:11:55.542577    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:11:55.542590    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:11:55.547476    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:11:55.547488    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
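
[editor's note] Once the IDs are known, the gatherer fans out over a fixed set of sources: docker logs --tail 400 per container, journalctl for kubelet and Docker/cri-docker, dmesg, crictl/docker ps for container status, and kubectl describe nodes. A compressed local sketch of that fan-out, with hypothetical structure; the commands are copied from the Run lines above, which minikube executes through ssh_runner.go inside the VM.

    // Gather the same log sources the cycle above walks through.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "describe nodes":   "sudo kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        // Per-container logs use the IDs resolved earlier, e.g. the two
        // kube-apiserver containers seen in this run.
        for _, id := range []string{"521bcdd33a54", "9f7f4433c63e"} {
            sources["kube-apiserver ["+id+"]"] = "docker logs --tail 400 " + id
        }
        for name, cmd := range sources {
            fmt.Println("Gathering logs for", name, "...")
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Println("  error:", err)
            }
            _ = out // in the real gatherer this is appended to the report
        }
    }
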
	I0913 12:11:53.235043    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:53.235086    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:58.087466    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:58.235394    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:58.235428    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:03.089728    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:03.090369    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:03.129455    4860 logs.go:276] 2 containers: [521bcdd33a54 9f7f4433c63e]
	I0913 12:12:03.129619    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:03.151695    4860 logs.go:276] 2 containers: [8b32910810ea c2c2a4ed7713]
	I0913 12:12:03.151815    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:03.167020    4860 logs.go:276] 1 containers: [740c3a5bf236]
	I0913 12:12:03.167119    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:03.179595    4860 logs.go:276] 2 containers: [0e119795880e cfbb35f2a5e2]
	I0913 12:12:03.179674    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:03.190480    4860 logs.go:276] 1 containers: [fe98fe3ee60d]
	I0913 12:12:03.190551    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:03.201140    4860 logs.go:276] 2 containers: [44737d8e70b5 c0a704046504]
	I0913 12:12:03.201210    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:03.226663    4860 logs.go:276] 0 containers: []
	W0913 12:12:03.226679    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:03.226755    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:03.238773    4860 logs.go:276] 2 containers: [3ee740354b86 7d5b5cfba187]
	I0913 12:12:03.238787    4860 logs.go:123] Gathering logs for kube-controller-manager [c0a704046504] ...
	I0913 12:12:03.238792    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0a704046504"
	I0913 12:12:03.253221    4860 logs.go:123] Gathering logs for storage-provisioner [3ee740354b86] ...
	I0913 12:12:03.253230    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ee740354b86"
	I0913 12:12:03.264917    4860 logs.go:123] Gathering logs for storage-provisioner [7d5b5cfba187] ...
	I0913 12:12:03.264927    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d5b5cfba187"
	I0913 12:12:03.276670    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:03.276681    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:03.299528    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:03.299536    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:03.337503    4860 logs.go:123] Gathering logs for kube-apiserver [9f7f4433c63e] ...
	I0913 12:12:03.337514    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f7f4433c63e"
	I0913 12:12:03.350178    4860 logs.go:123] Gathering logs for etcd [8b32910810ea] ...
	I0913 12:12:03.350190    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b32910810ea"
	I0913 12:12:03.364025    4860 logs.go:123] Gathering logs for coredns [740c3a5bf236] ...
	I0913 12:12:03.364034    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 740c3a5bf236"
	I0913 12:12:03.375254    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:12:03.375264    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:03.388032    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:03.388045    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:03.392687    4860 logs.go:123] Gathering logs for kube-apiserver [521bcdd33a54] ...
	I0913 12:12:03.392695    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 521bcdd33a54"
	I0913 12:12:03.406473    4860 logs.go:123] Gathering logs for kube-scheduler [0e119795880e] ...
	I0913 12:12:03.406484    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e119795880e"
	I0913 12:12:03.418413    4860 logs.go:123] Gathering logs for kube-scheduler [cfbb35f2a5e2] ...
	I0913 12:12:03.418425    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfbb35f2a5e2"
	I0913 12:12:03.430145    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:03.430157    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:03.464343    4860 logs.go:123] Gathering logs for etcd [c2c2a4ed7713] ...
	I0913 12:12:03.464352    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c2a4ed7713"
	I0913 12:12:03.480003    4860 logs.go:123] Gathering logs for kube-proxy [fe98fe3ee60d] ...
	I0913 12:12:03.480017    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe98fe3ee60d"
	I0913 12:12:03.492115    4860 logs.go:123] Gathering logs for kube-controller-manager [44737d8e70b5] ...
	I0913 12:12:03.492124    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44737d8e70b5"
	I0913 12:12:06.014254    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:03.235903    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:03.235935    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:11.016717    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:11.016804    4860 kubeadm.go:597] duration metric: took 4m4.509684s to restartPrimaryControlPlane
	W0913 12:12:11.016855    4860 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 12:12:11.016874    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 12:12:08.236459    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:08.236482    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:12.022804    4860 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005959s)
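
[editor's note] After roughly four minutes of failed healthz polls, minikube gives up on restarting the existing control plane and falls back to a full kubeadm reset — note the --cri-socket pointing at cri-dockerd and the PATH override onto the versioned binaries directory. A sketch of issuing that command locally, timing it the way ssh_runner.go:235 reports completion; same assumptions as the sketches above.

    // Reset the control plane the way the log above does: forced, against the
    // cri-dockerd socket, with the versioned kubeadm first on PATH.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" ` +
            `kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`
        start := time.Now()
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("Completed: %s: (%s)\n", cmd, time.Since(start))
        if err != nil {
            fmt.Println(err, string(out))
        }
    }
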
	I0913 12:12:12.022903    4860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 12:12:12.027661    4860 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:12:12.030439    4860 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:12:12.033043    4860 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 12:12:12.033049    4860 kubeadm.go:157] found existing configuration files:
	
	I0913 12:12:12.033070    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf
	I0913 12:12:12.036199    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 12:12:12.036221    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:12:12.039737    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf
	I0913 12:12:12.042183    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 12:12:12.042204    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:12:12.044896    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf
	I0913 12:12:12.047770    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 12:12:12.047798    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:12:12.050346    4860 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf
	I0913 12:12:12.052833    4860 kubeadm.go:163] "https://control-plane.minikube.internal:50300" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50300 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 12:12:12.052853    4860 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
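
[editor's note] The block above is the stale-config sweep: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and delete the file if the endpoint is missing. Here all four greps exit with status 2 because the reset already removed the files, so the rm calls are no-ops. A local sketch of the same check; the endpoint string and file list are taken from the log.

    // Remove kubeconfigs that do not point at the expected endpoint.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50300"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // ignore errors: the file may already be gone
            }
        }
    }
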
	I0913 12:12:12.055815    4860 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 12:12:12.074507    4860 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 12:12:12.074641    4860 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 12:12:12.123551    4860 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 12:12:12.123612    4860 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 12:12:12.123669    4860 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 12:12:12.176774    4860 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 12:12:12.179945    4860 out.go:235]   - Generating certificates and keys ...
	I0913 12:12:12.179978    4860 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 12:12:12.180011    4860 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 12:12:12.180049    4860 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 12:12:12.180101    4860 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 12:12:12.180139    4860 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 12:12:12.180184    4860 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 12:12:12.180237    4860 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 12:12:12.180280    4860 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 12:12:12.180326    4860 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 12:12:12.180369    4860 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 12:12:12.180395    4860 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 12:12:12.180421    4860 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 12:12:12.327151    4860 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 12:12:12.394420    4860 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 12:12:12.656154    4860 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 12:12:12.759542    4860 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 12:12:12.786910    4860 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 12:12:12.787285    4860 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 12:12:12.787306    4860 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 12:12:12.877259    4860 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 12:12:12.881282    4860 out.go:235]   - Booting up control plane ...
	I0913 12:12:12.881335    4860 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 12:12:12.881377    4860 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 12:12:12.881408    4860 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 12:12:12.881467    4860 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 12:12:12.881582    4860 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 12:12:13.237249    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:13.237270    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:17.382308    4860 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503421 seconds
	I0913 12:12:17.382372    4860 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 12:12:17.386602    4860 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 12:12:17.903487    4860 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 12:12:17.903902    4860 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-383000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 12:12:18.407221    4860 kubeadm.go:310] [bootstrap-token] Using token: dqur8d.53xl1lhmd8qyl1lx
	I0913 12:12:18.409844    4860 out.go:235]   - Configuring RBAC rules ...
	I0913 12:12:18.409909    4860 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 12:12:18.409959    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 12:12:18.411758    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 12:12:18.413380    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 12:12:18.414302    4860 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 12:12:18.415268    4860 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 12:12:18.418396    4860 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 12:12:18.577341    4860 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 12:12:18.810682    4860 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 12:12:18.811316    4860 kubeadm.go:310] 
	I0913 12:12:18.811356    4860 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 12:12:18.811365    4860 kubeadm.go:310] 
	I0913 12:12:18.811408    4860 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 12:12:18.811417    4860 kubeadm.go:310] 
	I0913 12:12:18.811432    4860 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 12:12:18.811469    4860 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 12:12:18.811506    4860 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 12:12:18.811511    4860 kubeadm.go:310] 
	I0913 12:12:18.811547    4860 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 12:12:18.811551    4860 kubeadm.go:310] 
	I0913 12:12:18.811591    4860 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 12:12:18.811596    4860 kubeadm.go:310] 
	I0913 12:12:18.811623    4860 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 12:12:18.811667    4860 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 12:12:18.811850    4860 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 12:12:18.811857    4860 kubeadm.go:310] 
	I0913 12:12:18.811922    4860 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 12:12:18.812092    4860 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 12:12:18.812102    4860 kubeadm.go:310] 
	I0913 12:12:18.812173    4860 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dqur8d.53xl1lhmd8qyl1lx \
	I0913 12:12:18.812262    4860 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 \
	I0913 12:12:18.812284    4860 kubeadm.go:310] 	--control-plane 
	I0913 12:12:18.812286    4860 kubeadm.go:310] 
	I0913 12:12:18.812358    4860 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 12:12:18.812361    4860 kubeadm.go:310] 
	I0913 12:12:18.812428    4860 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dqur8d.53xl1lhmd8qyl1lx \
	I0913 12:12:18.812517    4860 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 
	I0913 12:12:18.812606    4860 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
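
[editor's note] The join commands kubeadm prints above pin the cluster CA via --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A sketch of recomputing that hash from the CA PEM so it can be compared with the sha256:de4133d6… value in the join command; the certificate path is an assumption based on the certificateDir logged earlier.

    // Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the
    // DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command
    }
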
	I0913 12:12:18.812611    4860 cni.go:84] Creating CNI manager for ""
	I0913 12:12:18.812622    4860 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:12:18.814513    4860 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 12:12:18.822318    4860 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 12:12:18.825612    4860 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
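
[editor's note] The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above is minikube's bridge CNI config. Its exact contents are not shown in the log; the sketch below emits a plausible minimal bridge + host-local IPAM conflist of the general shape such files take. Every field value here is an assumption for illustration, not the bytes minikube wrote.

    // Emit a minimal bridge CNI conflist; field values are illustrative.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // assumed pod CIDR
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conf, "", "  ")
        fmt.Println(string(out))
    }
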
	I0913 12:12:18.830392    4860 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 12:12:18.830442    4860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 12:12:18.830461    4860 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-383000 minikube.k8s.io/updated_at=2024_09_13T12_12_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=running-upgrade-383000 minikube.k8s.io/primary=true
	I0913 12:12:18.836814    4860 ops.go:34] apiserver oom_adj: -16
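
[editor's note] ops.go reads the apiserver's OOM adjustment (-16 here) by cat-ing /proc/$(pgrep kube-apiserver)/oom_adj, as the Run line above shows. A local sketch of that check; -n (newest match) is an added assumption to disambiguate when multiple apiserver processes exist, and oom_adj is the legacy proc file the log's command targets.

    // Read the kube-apiserver's oom_adj, as `cat /proc/$(pgrep ...)/oom_adj` does.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
    }
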
	I0913 12:12:18.862001    4860 kubeadm.go:1113] duration metric: took 31.600292ms to wait for elevateKubeSystemPrivileges
	I0913 12:12:18.872127    4860 kubeadm.go:394] duration metric: took 4m12.380592542s to StartCluster
	I0913 12:12:18.872144    4860 settings.go:142] acquiring lock: {Name:mk30414fb8bdc9357b580933d1c04157a3bd6358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:12:18.872237    4860 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:12:18.872621    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:12:18.872823    4860 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:12:18.872913    4860 config.go:182] Loaded profile config "running-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:12:18.872953    4860 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 12:12:18.872986    4860 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-383000"
	I0913 12:12:18.872990    4860 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-383000"
	I0913 12:12:18.872995    4860 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-383000"
	I0913 12:12:18.872996    4860 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-383000"
	W0913 12:12:18.872998    4860 addons.go:243] addon storage-provisioner should already be in state true
	I0913 12:12:18.873009    4860 host.go:66] Checking if "running-upgrade-383000" exists ...
	I0913 12:12:18.874066    4860 kapi.go:59] client config for running-upgrade-383000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/running-upgrade-383000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040bd540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
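
[editor's note] The rest.Config dump above shows how minikube builds a Kubernetes client directly from the profile's client cert/key and CA. A sketch of constructing the equivalent client-go clientset from those same fields; the paths and host are copied from the log, the rest is standard client-go.

    // Build a client-go clientset from the TLS material the rest.Config shows.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        base := "/Users/jenkins/minikube-integration/19636-1170/.minikube"
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: base + "/profiles/running-upgrade-383000/client.crt",
                KeyFile:  base + "/profiles/running-upgrade-383000/client.key",
                CAFile:   base + "/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err) // in this run the call would time out, like the healthz polls
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
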
	I0913 12:12:18.874195    4860 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-383000"
	W0913 12:12:18.874200    4860 addons.go:243] addon default-storageclass should already be in state true
	I0913 12:12:18.874208    4860 host.go:66] Checking if "running-upgrade-383000" exists ...
	I0913 12:12:18.877144    4860 out.go:177] * Verifying Kubernetes components...
	I0913 12:12:18.877490    4860 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 12:12:18.881397    4860 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 12:12:18.881405    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:12:18.885063    4860 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:12:18.889164    4860 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:12:18.893210    4860 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:12:18.893216    4860 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 12:12:18.893222    4860 sshutil.go:53] new ssh client: &{IP:localhost Port:50268 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/running-upgrade-383000/id_rsa Username:docker}
	I0913 12:12:18.978440    4860 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:12:18.983447    4860 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:12:18.983499    4860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:12:18.988558    4860 api_server.go:72] duration metric: took 115.728834ms to wait for apiserver process to appear ...
	I0913 12:12:18.988566    4860 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:12:18.988573    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:18.992539    4860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 12:12:19.012718    4860 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:12:19.330975    4860 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 12:12:19.330989    4860 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 12:12:18.238253    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:18.238286    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:23.989746    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:23.989794    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:23.239826    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:23.239946    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:28.990272    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:28.990304    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:28.242385    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:28.242433    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:33.990374    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:33.990403    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:33.244572    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:33.244594    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:38.990558    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:38.990619    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:38.246197    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:38.246431    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:38.263637    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:12:38.263743    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:38.276768    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:12:38.276864    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:38.287983    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:12:38.288069    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:38.298292    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:12:38.298374    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:38.309068    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:12:38.309155    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:38.319305    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:12:38.319384    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:38.329754    5002 logs.go:276] 0 containers: []
	W0913 12:12:38.329765    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:38.329835    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:38.340454    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:12:38.340471    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:12:38.340476    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:12:38.353084    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:12:38.353101    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:12:38.364570    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:12:38.364580    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:12:38.379687    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:12:38.379701    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:12:38.397549    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:38.397559    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:38.476640    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:12:38.476651    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:12:38.490683    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:12:38.490694    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:12:38.502468    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:12:38.502483    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:12:38.518494    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:38.518504    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:38.545429    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:38.545436    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:38.584309    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:38.584320    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:38.588531    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:12:38.588538    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:38.600725    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:12:38.600736    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:12:38.614761    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:12:38.614772    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:12:38.626132    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:12:38.626144    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:12:38.637691    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:12:38.637702    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:12:38.683989    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:12:38.684008    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:12:41.198068    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:43.990856    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:43.990916    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:46.200097    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:46.200345    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:46.220904    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:12:46.221024    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:46.236090    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:12:46.236188    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:46.248589    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:12:46.248668    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:46.263700    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:12:46.263782    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:46.274644    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:12:46.274731    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:46.285805    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:12:46.285898    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:46.295552    5002 logs.go:276] 0 containers: []
	W0913 12:12:46.295563    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:46.295634    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:46.306052    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:12:46.306070    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:46.306075    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:46.342950    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:12:46.342963    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:12:46.357144    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:12:46.357154    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:12:46.372632    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:12:46.372643    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:12:46.411291    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:12:46.411303    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:12:46.423385    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:12:46.423399    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:12:46.437181    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:12:46.437191    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:12:46.448759    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:12:46.448769    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:12:46.465748    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:46.465758    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:46.491264    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:12:46.491274    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:46.502857    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:46.502869    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:46.541238    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:46.541245    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:46.545746    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:12:46.545754    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:12:46.560818    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:12:46.560828    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:12:46.572048    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:12:46.572058    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:12:46.583909    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:12:46.583920    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:12:46.596715    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:12:46.596725    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:12:48.991410    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:48.991456    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 12:12:49.332241    4860 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 12:12:49.336346    4860 out.go:177] * Enabled addons: storage-provisioner
	I0913 12:12:49.344483    4860 addons.go:510] duration metric: took 30.472845084s for enable addons: enabled=[storage-provisioner]
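
[editor's note] Enabling default-storageclass failed above because listing StorageClasses over 10.0.2.15:8443 hit an i/o timeout, while storage-provisioner succeeded since it only applies a manifest through the in-VM kubectl. For reference, a sketch of what "making standard the default storage class" amounts to with client-go; the annotation name is the upstream Kubernetes convention, and the kubeconfig path plus the "standard" class name are assumptions based on minikube defaults.

    // Mark the "standard" StorageClass as default — the step that failed above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // This List is what returned "dial tcp 10.0.2.15:8443: i/o timeout" above.
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
        for _, sc := range scs.Items {
            if sc.Name == "standard" { // minikube's default class name
                _, err = cs.StorageV1().StorageClasses().Patch(context.TODO(),
                    sc.Name, types.MergePatchType, patch, metav1.PatchOptions{})
                fmt.Println("patched standard as default:", err)
            }
        }
    }
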
	I0913 12:12:49.109821    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:53.992217    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:53.992355    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:54.112027    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:54.112424    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:54.140976    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:12:54.141112    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:54.163177    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:12:54.163287    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:54.176175    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:12:54.176265    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:54.187528    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:12:54.187616    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:54.202824    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:12:54.202905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:54.213565    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:12:54.213641    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:54.224193    5002 logs.go:276] 0 containers: []
	W0913 12:12:54.224202    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:54.224265    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:54.234276    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:12:54.234295    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:54.234301    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:54.273282    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:12:54.273291    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:12:54.287592    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:12:54.287602    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:12:54.307090    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:12:54.307098    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:12:54.318501    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:54.318511    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:54.343570    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:12:54.343578    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:54.355741    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:12:54.355755    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:12:54.393384    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:12:54.393397    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:12:54.405560    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:12:54.405573    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:12:54.417670    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:12:54.417685    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:12:54.431923    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:12:54.431933    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:12:54.443071    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:12:54.443080    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:12:54.455055    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:54.455069    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:54.459626    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:54.459637    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:54.498124    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:12:54.498135    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:12:54.513306    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:12:54.513316    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:12:54.530587    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:12:54.530597    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:12:58.993459    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:58.993501    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:57.043571    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:03.994735    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:03.994776    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:02.045620    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:02.045854    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:02.069938    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:02.070074    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:02.088276    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:02.088376    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:02.100682    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:02.100765    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:02.111576    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:02.111655    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:02.122121    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:02.122204    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:02.132677    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:02.132754    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:02.143213    5002 logs.go:276] 0 containers: []
	W0913 12:13:02.143227    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:02.143298    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:02.154979    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:02.154995    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:02.155000    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:02.169758    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:02.169771    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:02.187117    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:02.187127    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:02.201905    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:02.201917    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:02.214031    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:02.214044    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:02.228744    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:02.228754    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:02.254467    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:02.254476    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:02.270903    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:02.270913    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:02.308521    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:02.308532    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:02.320445    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:02.320457    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:02.331953    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:02.331966    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:02.347588    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:02.347600    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:02.387190    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:02.387197    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:02.424391    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:02.424404    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:02.436606    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:02.436618    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:02.449004    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:02.449019    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:02.453564    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:02.453571    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:04.967323    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:08.996802    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:08.996824    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:09.969577    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:09.969865    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:09.994319    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:09.994436    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:10.011145    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:10.011240    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:10.024906    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:10.025000    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:10.037008    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:10.037102    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:10.053179    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:10.053267    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:10.064231    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:10.064323    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:10.075630    5002 logs.go:276] 0 containers: []
	W0913 12:13:10.075641    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:10.075714    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:10.086571    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:10.086587    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:10.086594    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:10.126961    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:10.126970    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:10.140902    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:10.140913    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:10.155049    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:10.155060    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:10.166799    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:10.166814    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:10.178768    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:10.178779    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:10.182915    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:10.182925    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:10.217088    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:10.217099    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:10.232385    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:10.232397    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:10.248641    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:10.248656    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:10.264253    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:10.264269    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:10.276198    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:10.276208    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:10.288272    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:10.288282    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:10.326869    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:10.326880    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:10.338805    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:10.338818    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:10.357303    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:10.357314    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:10.369090    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:10.369101    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:13.998852    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:13.998906    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:12.896543    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:19.000990    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:19.001102    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:19.011659    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:19.011744    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:19.022367    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:19.022446    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:19.032809    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:19.032887    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:19.043448    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:19.043524    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:19.053786    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:19.053871    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:19.063534    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:19.063610    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:19.079071    4860 logs.go:276] 0 containers: []
	W0913 12:13:19.079088    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:19.079160    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:19.089613    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:19.089629    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:19.089634    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:19.122256    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:19.122262    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:19.161289    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:19.161298    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:19.176571    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:19.176581    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:19.191002    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:19.191012    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:19.204107    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:19.204119    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:19.215719    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:19.215733    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:19.238467    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:19.238477    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:19.250029    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:19.250038    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:19.261979    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:19.261992    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:19.266957    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:19.266965    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:19.281973    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:19.281987    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:19.293178    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:19.293188    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:17.898743    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:17.898971    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:17.924480    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:17.924575    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:17.937298    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:17.937389    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:17.947841    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:17.947915    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:17.957826    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:17.957905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:17.974904    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:17.974983    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:17.985908    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:17.985991    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:17.996411    5002 logs.go:276] 0 containers: []
	W0913 12:13:17.996425    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:17.996494    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:18.006862    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:18.006881    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:18.006886    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:18.018771    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:18.018786    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:18.032708    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:18.032719    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:18.071175    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:18.071188    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:18.083405    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:18.083418    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:18.095205    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:18.095217    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:18.106765    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:18.106775    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:18.117407    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:18.117418    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:18.141020    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:18.141027    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:18.176390    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:18.176400    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:18.188659    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:18.188668    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:18.227083    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:18.227093    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:18.231175    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:18.231183    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:18.246300    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:18.246309    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:18.269471    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:18.269486    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:18.283299    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:18.283313    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:18.294360    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:18.294375    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:20.812014    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:21.818456    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:25.814495    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:25.814792    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:25.844266    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:25.844416    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:25.862701    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:25.862805    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:25.875648    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:25.875725    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:25.891641    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:25.891729    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:25.907700    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:25.907781    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:25.917883    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:25.917956    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:25.928346    5002 logs.go:276] 0 containers: []
	W0913 12:13:25.928362    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:25.928435    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:25.939242    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:25.939262    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:25.939267    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:25.952746    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:25.952756    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:25.967269    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:25.967280    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:25.978544    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:25.978555    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:25.997546    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:25.997556    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:26.009823    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:26.009836    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:26.022591    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:26.022602    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:26.028171    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:26.028185    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:26.065803    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:26.065813    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:26.077664    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:26.077674    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:26.093350    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:26.093361    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:26.119423    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:26.119435    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:26.131439    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:26.131450    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:26.169402    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:26.169414    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:26.183685    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:26.183694    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:26.195604    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:26.195615    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:26.208087    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:26.208097    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:26.820896    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:26.821060    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:26.835292    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:26.835383    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:26.846974    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:26.847055    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:26.857926    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:26.858002    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:26.869176    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:26.869248    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:26.879482    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:26.879550    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:26.889771    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:26.889839    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:26.899991    4860 logs.go:276] 0 containers: []
	W0913 12:13:26.900004    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:26.900069    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:26.912886    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:26.912900    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:26.912906    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:26.948801    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:26.948813    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:26.960783    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:26.960797    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:26.977777    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:26.977787    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:27.001149    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:27.001161    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:27.016650    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:27.016662    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:27.028389    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:27.028405    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:27.040463    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:27.040474    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:27.074743    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:27.074757    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:27.079547    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:27.079555    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:27.093983    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:27.093994    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:27.107384    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:27.107397    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:27.118759    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:27.118772    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:29.636783    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:28.748416    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:34.639193    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:34.639420    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:34.666385    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:34.666494    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:34.688730    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:34.688809    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:34.699432    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:34.699517    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:34.710024    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:34.710108    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:34.720285    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:34.720368    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:34.736616    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:34.736699    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:34.750703    4860 logs.go:276] 0 containers: []
	W0913 12:13:34.750719    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:34.750791    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:34.760742    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:34.760757    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:34.760763    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:34.775330    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:34.775341    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:34.793355    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:34.793365    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:34.816770    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:34.816778    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:34.820980    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:34.820986    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:34.859154    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:34.859166    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:34.871023    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:34.871034    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:34.882821    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:34.882831    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:34.898087    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:34.898098    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:34.910151    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:34.910163    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:34.921812    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:34.921823    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:34.933362    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:34.933372    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:34.966256    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:34.966264    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:33.749991    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:33.750506    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:33.780570    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:33.780731    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:33.799843    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:33.799940    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:33.813587    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:33.813683    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:33.825271    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:33.825362    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:33.846256    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:33.846332    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:33.857979    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:33.858067    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:33.869418    5002 logs.go:276] 0 containers: []
	W0913 12:13:33.869429    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:33.869502    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:33.880159    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:33.880177    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:33.880183    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:33.904191    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:33.904202    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:33.938350    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:33.938361    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:33.952660    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:33.952674    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:33.965056    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:33.965068    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:33.977200    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:33.977214    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:33.996714    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:33.996725    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:34.008893    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:34.008904    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:34.047565    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:34.047575    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:34.052098    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:34.052105    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:34.063889    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:34.063900    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:34.079503    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:34.079513    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:34.117684    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:34.117698    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:34.132423    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:34.132434    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:34.144871    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:34.144882    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:34.159007    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:34.159017    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:34.170871    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:34.170884    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:36.683907    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:37.481735    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:41.686032    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:41.686191    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:41.699435    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:41.699509    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:41.710180    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:41.710249    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:41.720378    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:41.720447    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:41.730687    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:41.730777    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:41.746335    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:41.746417    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:41.757245    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:41.757321    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:41.767857    5002 logs.go:276] 0 containers: []
	W0913 12:13:41.767868    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:41.767943    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:41.778432    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:41.778449    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:41.778455    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:41.815472    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:41.815481    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:41.829868    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:41.829878    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:41.844451    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:41.844466    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:41.856097    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:41.856107    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:41.874067    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:41.874082    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:41.885815    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:41.885827    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:41.890087    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:41.890096    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:41.924714    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:41.924724    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:42.483817    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:42.484015    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:42.498307    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:42.498404    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:42.512420    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:42.512496    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:42.522944    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:42.523022    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:42.533438    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:42.533524    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:42.543473    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:42.543554    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:42.553850    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:42.553931    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:42.564226    4860 logs.go:276] 0 containers: []
	W0913 12:13:42.564238    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:42.564311    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:42.574838    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:42.574854    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:42.574860    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:42.592809    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:42.592822    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:42.608850    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:42.608861    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:42.642072    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:42.642084    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:42.646890    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:42.646898    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:42.663331    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:42.663341    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:42.675322    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:42.675333    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:42.687127    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:42.687139    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:42.710374    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:42.710382    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:42.745529    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:42.745544    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:42.760437    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:42.760452    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:42.771854    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:42.771869    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:42.791633    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:42.791642    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:45.303428    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:41.939690    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:41.939700    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:41.955309    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:41.955320    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:41.966958    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:41.966971    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:41.979390    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:41.979403    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:41.992645    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:41.992655    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:42.034788    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:42.034805    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:42.048033    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:42.048044    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:42.059350    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:42.059360    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:44.584726    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:50.305508    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:50.305678    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:50.318866    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:50.318956    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:50.332428    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:50.332504    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:50.347173    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:50.347272    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:50.357846    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:50.357928    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:50.367922    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:50.368000    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:50.378416    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:50.378492    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:50.388702    4860 logs.go:276] 0 containers: []
	W0913 12:13:50.388716    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:50.388788    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:50.399156    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:13:50.399169    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:50.399175    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:50.403798    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:50.403805    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:50.415198    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:50.415209    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:50.439549    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:50.439559    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:50.451267    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:50.451278    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:50.485853    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:50.485861    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:50.500676    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:50.500687    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:50.514729    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:50.514740    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:50.525817    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:50.525829    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:50.540326    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:50.540338    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:50.552115    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:50.552127    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:50.569716    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:50.569726    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:50.581587    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:50.581597    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:49.586927    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:49.587286    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:49.616325    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:49.616482    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:49.638374    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:49.638467    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:49.651315    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:49.651403    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:49.662992    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:49.663079    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:49.673327    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:49.673405    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:49.692297    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:49.692374    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:49.702685    5002 logs.go:276] 0 containers: []
	W0913 12:13:49.702696    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:49.702766    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:49.713220    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:49.713238    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:49.713243    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:49.751944    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:49.751956    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:49.769553    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:49.769563    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:49.783780    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:49.783791    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:49.797383    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:49.797393    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:49.809680    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:49.809691    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:49.844354    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:49.844369    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:49.882619    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:49.882633    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:49.895048    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:49.895058    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:49.909433    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:49.909443    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:49.932342    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:49.932350    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:49.943998    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:49.944009    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:49.948067    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:49.948076    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:49.962297    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:49.962307    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:49.977248    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:49.977257    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:49.988897    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:49.988912    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:50.000100    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:50.000109    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:53.120275    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:52.513513    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:58.122540    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:58.122769    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:58.143745    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:13:58.143857    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:58.160979    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:13:58.161073    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:58.175551    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:13:58.175634    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:58.186520    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:13:58.186607    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:58.197813    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:13:58.197896    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:58.208628    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:13:58.208713    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:58.222701    4860 logs.go:276] 0 containers: []
	W0913 12:13:58.222714    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:58.222781    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:58.233189    4860 logs.go:276] 1 containers: [b2d29a0663c8]
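
	The enumeration block above locates each component's containers with a name filter; because it uses docker ps -a, exited containers are included, which is why restarted components (for example the two kube-apiserver IDs PID 5002 keeps reporting) show two entries. A sketch of the same query (listContainers is an illustrative name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers reproduces the enumeration step from the log:
	//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	// It returns every matching ID, running or exited.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainers("kube-apiserver")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Matches the log's "N containers: [...]" reporting format.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
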
	I0913 12:13:58.233203    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:13:58.233208    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:13:58.249445    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:13:58.249455    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:13:58.265413    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:13:58.265424    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:13:58.277133    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:58.277144    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:58.313809    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:58.313821    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
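
	Alongside per-container logs, each pass also collects host-level sources: the kubelet and Docker units via journalctl, and the kernel ring buffer via dmesg filtered to warnings and above. A sketch of those two collectors from Go, quoting the exact commands shown in the log (gatherSystemLogs is an assumed helper name):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherSystemLogs runs the two host-level collectors seen above:
	// the last 400 journal entries for a unit, and dmesg limited to
	// warn/err/crit/alert/emerg. The flags are copied from the log.
	func gatherSystemLogs(unit string) (journal, kernel string, err error) {
		j, err := exec.Command("/bin/bash", "-c",
			"sudo journalctl -u "+unit+" -n 400").CombinedOutput()
		if err != nil {
			return "", "", err
		}
		k, err := exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
		return string(j), string(k), err
	}

	func main() {
		j, k, err := gatherSystemLogs("kubelet")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println(len(j), "bytes of journal,", len(k), "bytes of kernel log")
	}
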
	I0913 12:13:58.318953    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:58.318960    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:58.355301    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:13:58.355318    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:13:58.369626    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:13:58.369635    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:13:58.381510    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:58.381519    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:58.406791    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:13:58.406798    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:13:58.420910    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:13:58.420923    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:13:58.440046    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:13:58.440059    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:13:58.451504    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:13:58.451515    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
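
	The "container status" step above is deliberately defensive: `which crictl || echo crictl` substitutes the bare name when crictl is not on PATH, so the first sudo command fails cleanly and the trailing `|| sudo docker ps -a` runs instead. The same fallback expressed directly in Go (containerStatus is an illustrative name):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus tries crictl first and falls back to docker,
	// matching the shell one-liner in the log:
	//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	func containerStatus() (string, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Print(out)
	}
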
	I0913 12:14:00.966139    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:57.514211    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:57.514332    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:57.524801    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:57.524885    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:57.535443    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:57.535524    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:57.550369    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:57.550441    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:57.561042    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:57.561131    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:57.571331    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:57.571412    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:57.582079    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:57.582161    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:57.591789    5002 logs.go:276] 0 containers: []
	W0913 12:13:57.591801    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:57.591868    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:57.602540    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:57.602557    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:57.602563    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:57.614280    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:57.614291    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:57.651636    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:57.651645    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:57.686768    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:57.686781    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:57.724464    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:57.724478    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:57.738611    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:57.738621    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:57.753096    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:57.753107    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:57.764688    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:57.764700    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:57.783942    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:57.783956    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:57.788609    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:57.788619    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:57.802944    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:57.802954    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:57.823712    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:57.823725    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:57.835166    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:57.835176    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:57.860536    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:57.860547    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:57.872179    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:57.872189    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:57.886778    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:57.886793    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:57.902865    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:57.902879    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:00.416665    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:05.968231    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:05.968405    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:05.984297    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:05.984406    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:05.996765    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:05.996847    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:06.007568    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:06.007641    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:06.017804    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:06.017880    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:06.028140    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:06.028225    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:06.038674    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:06.038748    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:06.048429    4860 logs.go:276] 0 containers: []
	W0913 12:14:06.048442    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:06.048508    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:06.058610    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:06.058628    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:06.058633    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:06.070744    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:06.070758    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:06.090579    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:06.090591    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:06.102135    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:06.102150    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:06.106872    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:06.106879    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:06.141227    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:06.141238    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:06.159278    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:06.159287    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:06.171281    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:06.171290    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:06.186984    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:06.186994    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:06.211600    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:06.211610    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:06.246326    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:06.246336    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:06.260221    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:06.260234    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:06.271514    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:06.271525    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:05.418770    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:05.418973    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:05.437879    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:05.437985    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:05.451504    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:05.451597    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:05.463674    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:05.463759    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:05.474673    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:05.474749    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:05.485421    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:05.485503    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:05.497109    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:05.497193    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:05.507206    5002 logs.go:276] 0 containers: []
	W0913 12:14:05.507219    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:05.507286    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:05.521750    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:05.521768    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:05.521774    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:05.557740    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:05.557754    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:05.569186    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:05.569201    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:05.585894    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:05.585908    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:05.597911    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:05.597921    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:05.616218    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:05.616228    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:05.639302    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:05.639309    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:05.652168    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:05.652180    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:05.669707    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:05.669721    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:05.680772    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:05.680781    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:05.695119    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:05.695134    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:05.733154    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:05.733164    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:05.745527    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:05.745540    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:05.784425    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:05.784433    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:05.788981    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:05.788988    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:05.802525    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:05.802536    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:05.817672    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:05.817682    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:08.785728    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:08.330642    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:13.787161    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:13.787304    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:13.798647    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:13.798728    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:13.809687    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:13.809776    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:13.819878    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:13.819960    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:13.830959    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:13.831038    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:13.848035    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:13.848116    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:13.859154    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:13.859234    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:13.869561    4860 logs.go:276] 0 containers: []
	W0913 12:14:13.869582    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:13.869651    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:13.880232    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:13.880245    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:13.880252    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:13.915585    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:13.915593    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:13.950892    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:13.950903    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:13.963004    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:13.963016    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:13.975100    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:13.975112    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:13.987071    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:13.987083    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:14.011058    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:14.011066    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:14.015126    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:14.015132    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:14.028809    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:14.028822    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:14.042813    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:14.042824    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:14.057534    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:14.057545    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:14.086697    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:14.086707    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:14.102464    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:14.102475    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:13.331450    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:13.331633    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:13.350165    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:13.350265    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:13.363885    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:13.363974    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:13.381363    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:13.381448    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:13.391723    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:13.391808    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:13.401942    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:13.402023    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:13.412092    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:13.412170    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:13.430312    5002 logs.go:276] 0 containers: []
	W0913 12:14:13.430324    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:13.430395    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:13.441039    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:13.441057    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:13.441065    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:13.480229    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:13.480237    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:13.517512    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:13.517522    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:13.531715    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:13.531725    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:13.543993    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:13.544005    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:13.557335    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:13.557490    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:13.569764    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:13.569777    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:13.581617    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:13.581628    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:13.593288    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:13.593301    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:13.605247    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:13.605261    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:13.622084    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:13.622097    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:13.636265    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:13.636279    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:13.659396    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:13.659409    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:13.682578    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:13.682585    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:13.695100    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:13.695112    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:13.699931    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:13.699940    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:13.738785    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:13.738795    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:16.255484    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:16.615633    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:21.257686    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:21.257999    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:21.287898    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:21.288046    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:21.306556    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:21.306657    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:21.320725    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:21.320802    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:21.331610    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:21.331683    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:21.342176    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:21.342261    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:21.353400    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:21.353493    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:21.363938    5002 logs.go:276] 0 containers: []
	W0913 12:14:21.363950    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:21.364018    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:21.374549    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:21.374566    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:21.374572    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:21.413068    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:21.413086    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:21.448555    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:21.448570    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:21.487812    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:21.487829    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:21.500316    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:21.500327    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:21.513103    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:21.513115    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:21.517081    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:21.517089    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:21.531601    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:21.531612    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:21.545370    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:21.545380    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:21.557587    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:21.557598    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:21.568535    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:21.568546    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:21.592092    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:21.592100    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:21.603684    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:21.603698    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:21.633638    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:21.633648    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:21.648212    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:21.648224    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:21.671958    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:21.671969    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:21.686001    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:21.686013    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:21.617989    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:21.618097    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:21.629705    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:21.629795    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:21.643458    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:21.643549    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:21.655118    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:21.655203    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:21.666724    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:21.666802    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:21.677749    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:21.677843    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:21.689037    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:21.689123    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:21.705292    4860 logs.go:276] 0 containers: []
	W0913 12:14:21.705307    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:21.705379    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:21.716151    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:21.716165    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:21.716170    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:21.753049    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:21.753063    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:21.768572    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:21.768586    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:21.783051    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:21.783065    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:21.794721    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:21.794736    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:21.806297    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:21.806306    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:21.823946    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:21.823957    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:21.835943    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:21.835954    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:21.860926    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:21.860937    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:21.873101    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:21.873116    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:21.908210    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:21.908219    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:21.912844    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:21.912851    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:21.923848    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:21.923863    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:24.445271    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:24.203703    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:29.447382    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:29.447481    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:29.459146    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:29.459233    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:29.470692    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:29.470778    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:29.487814    4860 logs.go:276] 2 containers: [5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:29.487897    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:29.499164    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:29.499242    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:29.509748    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:29.509836    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:29.520689    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:29.520774    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:29.532169    4860 logs.go:276] 0 containers: []
	W0913 12:14:29.532181    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:29.532253    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:29.543126    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:29.543142    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:29.543149    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:29.558290    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:29.558299    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:29.572723    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:29.572737    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:29.585848    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:29.585861    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:29.602137    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:29.602149    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:29.615134    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:29.615150    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:29.638984    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:29.638993    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:29.650575    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:29.650587    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:29.683285    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:29.683293    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:29.687461    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:29.687468    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:29.721559    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:29.721570    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:29.733240    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:29.733254    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:29.744704    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:29.744718    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:29.205892    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:29.206034    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:29.216935    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:29.217029    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:29.227351    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:29.227455    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:29.237848    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:29.237933    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:29.247920    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:29.248003    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:29.257987    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:29.258069    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:29.270002    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:29.270089    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:29.279923    5002 logs.go:276] 0 containers: []
	W0913 12:14:29.279935    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:29.280008    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:29.290408    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:29.290426    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:29.290431    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:29.329930    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:29.329939    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:29.334540    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:29.334547    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:29.348502    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:29.348515    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:29.382870    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:29.382881    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:29.420776    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:29.420787    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:29.435322    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:29.435332    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:29.460312    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:29.460320    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:29.477631    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:29.477644    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:29.490546    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:29.490555    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:29.515900    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:29.515914    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:29.528589    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:29.528605    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:29.543247    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:29.543256    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:29.556680    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:29.556695    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:29.573212    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:29.573220    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:29.592012    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:29.592028    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:29.604140    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:29.604151    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:32.262274    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:32.122912    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:37.264367    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:37.264486    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:37.281375    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:37.281456    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:37.292693    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:37.292778    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:37.305595    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:37.305683    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:37.317545    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:37.317631    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:37.329346    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:37.329431    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:37.340458    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:37.340542    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:37.352604    4860 logs.go:276] 0 containers: []
	W0913 12:14:37.352616    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:37.352687    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:37.365112    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:37.365131    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:37.365137    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:37.403120    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:14:37.403133    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:14:37.415380    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:37.415392    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:37.428128    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:37.428143    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:37.453561    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:37.453575    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:37.469546    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:37.469557    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:37.483595    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:37.483607    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:37.500092    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:37.500103    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:37.520778    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:14:37.520789    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:14:37.536104    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:37.536117    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:37.551025    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:37.551037    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:37.565023    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:37.565037    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:37.580309    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:37.580323    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:37.614851    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:37.614869    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:37.619388    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:37.619395    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:40.135551    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:37.125079    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:37.125249    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:37.137647    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:37.137743    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:37.148407    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:37.148490    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:37.158514    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:37.158602    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:37.169407    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:37.169496    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:37.180826    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:37.180905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:37.191187    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:37.191269    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:37.202073    5002 logs.go:276] 0 containers: []
	W0913 12:14:37.202086    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:37.202163    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:37.212366    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:37.212384    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:37.212389    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:37.254132    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:37.254143    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:37.272076    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:37.272089    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:37.295526    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:37.295535    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:37.308543    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:37.308555    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:37.323512    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:37.323525    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:37.342877    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:37.342889    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:37.355622    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:37.355633    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:37.380258    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:37.380270    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:37.391735    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:37.391749    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:37.405029    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:37.405039    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:37.409397    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:37.409408    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:37.422254    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:37.422265    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:37.435661    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:37.435672    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:37.459490    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:37.459501    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:37.508644    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:37.508655    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:37.533309    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:37.533325    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:40.081420    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:45.137856    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:45.138026    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:45.156348    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:45.156440    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:45.171808    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:45.171897    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:45.184081    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:45.184165    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:45.197797    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:45.197879    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:45.209904    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:45.210037    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:45.221482    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:45.221559    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:45.237783    4860 logs.go:276] 0 containers: []
	W0913 12:14:45.237793    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:45.237865    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:45.251862    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:45.251880    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:14:45.251885    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:14:45.264579    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:45.264592    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:45.277669    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:45.277686    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:45.293656    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:45.293674    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:45.314420    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:45.314432    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:45.319160    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:45.319167    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:45.332234    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:45.332247    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:45.370215    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:45.370227    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:45.389152    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:14:45.389164    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:14:45.402636    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:45.402651    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:45.415087    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:45.415115    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:45.441816    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:45.441829    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:45.454270    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:45.454282    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:45.491685    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:45.491697    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:45.507369    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:45.507385    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:45.083708    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:45.084202    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:45.125957    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:45.126122    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:45.148503    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:45.148621    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:45.164663    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:45.164763    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:45.179915    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:45.180013    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:45.191857    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:45.191940    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:45.204075    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:45.204161    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:45.215147    5002 logs.go:276] 0 containers: []
	W0913 12:14:45.215158    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:45.215232    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:45.226875    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:45.226892    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:45.226898    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:45.231550    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:45.231561    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:45.247972    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:45.247984    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:45.265377    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:45.265386    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:45.282795    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:45.282812    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:45.295378    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:45.295387    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:45.320309    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:45.320317    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:45.335218    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:45.335228    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:45.347927    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:45.347939    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:45.361376    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:45.361389    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:45.398570    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:45.398582    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:45.411460    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:45.411472    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:45.424187    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:45.424200    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:45.464256    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:45.464279    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:45.479446    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:45.479459    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:45.518237    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:45.518250    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:45.537560    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:45.537578    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:48.020874    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:48.054370    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:53.023013    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:53.023296    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:53.046459    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:14:53.046596    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:53.061404    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:14:53.061498    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:53.075049    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:14:53.075137    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:53.086521    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:14:53.086603    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:53.097973    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:14:53.098053    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:53.110417    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:14:53.110505    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:53.121697    4860 logs.go:276] 0 containers: []
	W0913 12:14:53.121710    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:53.121785    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:53.134061    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:14:53.134080    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:53.134086    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:53.171115    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:14:53.171134    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:14:53.183956    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:14:53.183969    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:14:53.199342    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:14:53.199351    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:14:53.213015    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:53.213025    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:53.218209    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:53.218220    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:53.255018    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:14:53.255030    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:14:53.273040    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:14:53.273050    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:14:53.286152    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:14:53.286166    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:14:53.301108    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:14:53.301123    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:14:53.314261    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:14:53.314273    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:14:53.335365    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:14:53.335376    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:14:53.348496    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:14:53.348509    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:14:53.364416    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:53.364427    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:53.390540    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:14:53.390551    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:55.905574    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:53.054974    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:53.055101    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:53.069361    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:53.069452    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:53.081791    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:53.081877    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:53.100498    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:53.100559    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:53.112005    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:53.112067    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:53.123240    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:53.123306    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:53.134791    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:53.134877    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:53.145834    5002 logs.go:276] 0 containers: []
	W0913 12:14:53.145844    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:53.145915    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:53.157267    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:53.157284    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:53.157290    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:53.161760    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:53.161767    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:53.177488    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:53.177503    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:53.197987    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:53.197997    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:53.210992    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:53.211006    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:53.236081    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:53.236095    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:53.275362    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:53.275374    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:53.291273    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:53.291289    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:53.304072    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:53.304084    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:53.317063    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:53.317073    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:53.330831    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:53.330842    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:53.342966    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:53.342977    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:53.384218    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:53.384229    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:53.420600    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:53.420613    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:53.434668    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:53.434678    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:53.449155    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:53.449165    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:53.460804    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:53.460817    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:55.973985    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:00.907752    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:00.907937    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:00.921018    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:00.921096    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:00.931824    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:00.931909    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:00.942239    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:00.942308    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:00.952851    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:00.952918    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:00.963464    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:00.963545    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:00.974766    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:00.974805    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:00.986213    4860 logs.go:276] 0 containers: []
	W0913 12:15:00.986223    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:00.986261    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:00.997972    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:00.998022    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:00.998029    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:01.034277    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:01.034289    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:01.047274    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:01.047286    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:01.062366    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:01.062381    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:01.075393    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:01.075406    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:01.091588    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:01.091602    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:01.103855    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:01.103867    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:01.122451    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:01.122461    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:01.134692    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:01.134705    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:01.147454    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:01.147471    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:01.169738    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:01.169750    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:01.195032    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:01.195048    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:01.208397    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:01.208409    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:01.213881    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:01.213889    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:01.251848    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:01.251859    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:00.974245    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:00.974351    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:00.985745    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:00.985827    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:00.996820    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:00.996901    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:01.008629    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:01.008718    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:01.019413    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:01.019527    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:01.030673    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:01.030751    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:01.041883    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:01.041966    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:01.053382    5002 logs.go:276] 0 containers: []
	W0913 12:15:01.053394    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:01.053470    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:01.067895    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:01.067917    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:01.067923    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:01.081566    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:01.081578    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:01.106767    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:01.106781    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:01.121339    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:01.121350    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:01.136670    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:01.136678    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:01.149323    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:01.149333    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:01.165259    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:01.165270    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:01.184283    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:01.184293    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:01.199021    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:01.199032    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:01.239979    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:01.239993    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:01.245035    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:01.245045    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:01.260129    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:01.260143    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:01.273142    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:01.273152    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:01.284456    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:01.284467    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:01.295554    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:01.295564    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:01.332813    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:01.332824    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:01.371402    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:01.371412    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:03.767277    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:03.885158    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:08.769493    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:08.769631    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:08.780635    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:08.780718    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:08.791655    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:08.791750    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:08.802852    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:08.802934    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:08.813532    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:08.813601    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:08.824817    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:08.824897    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:08.835554    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:08.835637    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:08.851184    4860 logs.go:276] 0 containers: []
	W0913 12:15:08.851197    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:08.851270    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:08.861560    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:08.861578    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:08.861584    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:08.877173    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:08.877183    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:08.893910    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:08.893921    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:08.909582    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:08.909595    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:08.922301    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:08.922309    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:08.937641    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:08.937653    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:08.951342    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:08.951355    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:08.977824    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:08.977840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:09.015944    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:09.015956    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:09.029945    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:09.029962    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:09.042711    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:09.042725    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:09.055349    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:09.055361    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:09.074336    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:09.074348    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:09.086772    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:09.086781    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:09.122726    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:09.122737    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:08.887390    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:08.887525    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:08.899224    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:08.899321    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:08.911297    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:08.911383    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:08.922212    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:08.922296    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:08.933699    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:08.933786    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:08.945791    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:08.945873    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:08.957531    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:08.957614    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:08.968254    5002 logs.go:276] 0 containers: []
	W0913 12:15:08.968267    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:08.968340    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:08.979313    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:08.979329    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:08.979343    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:08.984042    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:08.984052    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:09.022683    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:09.022697    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:09.038139    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:09.038152    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:09.050560    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:09.050572    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:09.064851    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:09.064864    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:09.105403    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:09.105419    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:09.120910    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:09.120921    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:09.137392    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:09.137403    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:09.155433    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:09.155442    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:09.169919    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:09.169930    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:09.208976    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:09.208988    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:09.221085    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:09.221096    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:09.232525    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:09.232537    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:09.248539    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:09.248549    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:09.265321    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:09.265333    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:09.288747    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:09.288757    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:11.804470    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:11.629748    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:16.804853    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:16.804970    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:16.816821    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:16.816898    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:16.828353    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:16.828437    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:16.839846    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:16.839927    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:16.851450    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:16.851530    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:16.863169    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:16.863251    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:16.875104    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:16.875183    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:16.886658    5002 logs.go:276] 0 containers: []
	W0913 12:15:16.886671    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:16.886742    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:16.898552    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:16.898575    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:16.898581    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:16.918307    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:16.918324    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:16.632011    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:16.632458    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:16.667456    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:16.667617    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:16.686754    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:16.686867    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:16.700546    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:16.700642    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:16.712208    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:16.712286    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:16.725337    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:16.725418    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:16.735856    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:16.735942    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:16.748180    4860 logs.go:276] 0 containers: []
	W0913 12:15:16.748192    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:16.748264    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:16.758974    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:16.758991    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:16.758997    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:16.792790    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:16.792807    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:16.808149    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:16.808159    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:16.821669    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:16.821681    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:16.838022    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:16.838039    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:16.851303    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:16.851315    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:16.864783    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:16.864794    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:16.878033    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:16.878048    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:16.891086    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:16.891098    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:16.911885    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:16.911897    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:16.924528    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:16.924539    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:16.950690    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:16.950709    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:16.987651    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:16.987661    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:16.993012    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:16.993019    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:17.011586    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:17.011599    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:19.526159    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:16.943208    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:16.943218    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:16.956468    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:16.956480    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:16.998439    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:16.998453    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:17.035783    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:17.035794    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:17.049939    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:17.049955    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:17.063012    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:17.063026    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:17.075585    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:17.075602    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:17.086962    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:17.086971    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:17.091186    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:17.091193    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:17.129454    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:17.129465    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:17.144397    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:17.144407    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:17.155554    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:17.155564    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:17.169714    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:17.169724    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:17.181590    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:17.181600    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:17.197506    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:17.197516    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:19.717593    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:24.528248    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:24.528572    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:24.555068    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:24.555199    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:24.574116    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:24.574214    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:24.587157    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:24.587238    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:24.598447    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:24.598526    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:24.608840    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:24.608919    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:24.624416    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:24.624490    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:24.634727    4860 logs.go:276] 0 containers: []
	W0913 12:15:24.634738    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:24.634807    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:24.645318    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:24.645337    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:24.645343    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:24.682741    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:24.682751    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:24.697614    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:24.697625    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:24.710171    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:24.710181    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:24.722505    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:24.722520    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:24.739408    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:24.739419    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:24.753519    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:24.753532    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:24.766766    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:24.766774    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:24.781255    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:24.781269    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:24.798792    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:24.798803    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:24.803863    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:24.803874    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:24.817105    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:24.817117    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:24.851948    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:24.851966    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:24.867675    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:24.867688    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:24.889119    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:24.889140    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:24.720010    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:24.720113    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:24.731592    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:24.731682    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:24.742917    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:24.743004    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:24.754416    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:24.754495    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:24.765703    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:24.765795    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:24.777228    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:24.777314    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:24.795006    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:24.795089    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:24.806659    5002 logs.go:276] 0 containers: []
	W0913 12:15:24.806671    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:24.806747    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:24.818288    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:24.818307    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:24.818314    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:24.854353    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:24.854363    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:24.871945    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:24.871956    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:24.887517    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:24.887529    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:24.906541    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:24.906555    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:24.934986    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:24.935002    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:24.946770    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:24.946786    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:24.983212    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:24.983222    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:24.987272    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:24.987280    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:25.004483    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:25.004493    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:25.015880    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:25.015891    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:25.037198    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:25.037208    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:25.075291    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:25.075301    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:25.089309    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:25.089318    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:25.101195    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:25.101212    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:25.113390    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:25.113400    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:25.128819    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:25.128829    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:27.416881    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:27.643034    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:32.418987    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:32.419176    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:32.431487    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:32.431564    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:32.441696    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:32.441762    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:32.451961    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:32.452030    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:32.462166    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:32.462245    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:32.472773    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:32.472854    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:32.487118    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:32.487201    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:32.497339    4860 logs.go:276] 0 containers: []
	W0913 12:15:32.497350    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:32.497415    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:32.508834    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:32.508850    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:32.508856    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:32.513195    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:32.513201    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:32.547928    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:32.547940    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:32.562787    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:32.562797    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:32.585159    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:32.585171    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:32.610362    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:32.610381    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:32.624327    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:32.624337    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:32.639927    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:32.639940    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:32.661828    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:32.661840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:32.674356    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:32.674368    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:32.709974    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:32.709990    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:32.723899    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:32.723912    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:32.737220    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:32.737229    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:32.749522    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:32.749536    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:32.762011    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:32.762027    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:35.276478    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:32.643137    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:32.643274    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:32.654726    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:32.654804    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:32.665811    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:32.665896    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:32.678727    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:32.678812    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:32.690284    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:32.690372    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:32.701483    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:32.701568    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:32.713069    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:32.713151    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:32.724837    5002 logs.go:276] 0 containers: []
	W0913 12:15:32.724847    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:32.724924    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:32.736641    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:32.736660    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:32.736665    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:32.751987    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:32.751995    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:32.764658    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:32.764671    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:32.778112    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:32.778123    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:32.800620    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:32.800636    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:32.805497    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:32.805506    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:32.841344    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:32.841356    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:32.880898    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:32.880915    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:32.892871    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:32.892882    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:32.908698    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:32.908710    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:32.931265    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:32.931273    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:32.969133    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:32.969140    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:32.989643    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:32.989654    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:33.004233    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:33.004242    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:33.022357    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:33.022371    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:33.041218    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:33.041228    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:33.052662    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:33.052674    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:35.567029    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:40.278961    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:40.279528    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:40.316261    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:40.316428    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:40.335929    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:40.336036    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:40.350870    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:40.350963    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:40.363293    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:40.363381    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:40.373877    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:40.373951    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:40.384636    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:40.384716    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:40.396325    4860 logs.go:276] 0 containers: []
	W0913 12:15:40.396334    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:40.396400    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:40.406655    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:40.406673    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:40.406679    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:40.443163    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:40.443175    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:40.455882    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:40.455892    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:40.467836    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:40.467847    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:40.479825    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:40.479840    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:40.515357    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:40.515367    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:40.519850    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:40.519859    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:40.537884    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:40.537895    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:40.557135    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:40.557146    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:40.571502    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:40.571512    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:40.584093    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:40.584105    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:40.600334    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:40.600348    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:40.626743    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:40.626761    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:40.639797    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:40.639808    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:40.657284    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:40.657297    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:40.568324    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:40.568359    5002 kubeadm.go:597] duration metric: took 4m4.061955083s to restartPrimaryControlPlane
	W0913 12:15:40.568398    5002 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 12:15:40.568410    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 12:15:41.584732    5002 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016351125s)
	I0913 12:15:41.584805    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 12:15:41.589916    5002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:15:41.593183    5002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:15:41.596192    5002 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 12:15:41.596198    5002 kubeadm.go:157] found existing configuration files:
	
	I0913 12:15:41.596229    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I0913 12:15:41.598725    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 12:15:41.598751    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:15:41.601577    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I0913 12:15:41.604918    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 12:15:41.604946    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:15:41.608187    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I0913 12:15:41.610776    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 12:15:41.610806    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:15:41.613614    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I0913 12:15:41.617056    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 12:15:41.617080    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 12:15:41.620302    5002 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 12:15:41.637167    5002 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 12:15:41.637209    5002 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 12:15:41.686767    5002 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 12:15:41.686818    5002 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 12:15:41.686870    5002 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 12:15:41.735948    5002 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 12:15:41.739200    5002 out.go:235]   - Generating certificates and keys ...
	I0913 12:15:41.739234    5002 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 12:15:41.739263    5002 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 12:15:41.739312    5002 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 12:15:41.739342    5002 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 12:15:41.739384    5002 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 12:15:41.739417    5002 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 12:15:41.739466    5002 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 12:15:41.739502    5002 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 12:15:41.739543    5002 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 12:15:41.739590    5002 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 12:15:41.739624    5002 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 12:15:41.739654    5002 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 12:15:41.957621    5002 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 12:15:42.048515    5002 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 12:15:42.120903    5002 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 12:15:42.268769    5002 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 12:15:42.297495    5002 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 12:15:42.297808    5002 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 12:15:42.297930    5002 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 12:15:42.386048    5002 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 12:15:43.171047    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:42.394145    5002 out.go:235]   - Booting up control plane ...
	I0913 12:15:42.394205    5002 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 12:15:42.394241    5002 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 12:15:42.394277    5002 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 12:15:42.394324    5002 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 12:15:42.394400    5002 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 12:15:46.891016    5002 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501278 seconds
	I0913 12:15:46.891081    5002 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 12:15:46.894761    5002 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 12:15:47.418898    5002 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 12:15:47.419179    5002 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-748000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 12:15:47.923478    5002 kubeadm.go:310] [bootstrap-token] Using token: uzec93.1x1zuwjayh1tkgqh
	I0913 12:15:47.929407    5002 out.go:235]   - Configuring RBAC rules ...
	I0913 12:15:47.929478    5002 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 12:15:47.929526    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 12:15:47.933950    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 12:15:47.934831    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 12:15:47.935772    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 12:15:47.936566    5002 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 12:15:47.939679    5002 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 12:15:48.114156    5002 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 12:15:48.327760    5002 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 12:15:48.328330    5002 kubeadm.go:310] 
	I0913 12:15:48.328403    5002 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 12:15:48.328411    5002 kubeadm.go:310] 
	I0913 12:15:48.328451    5002 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 12:15:48.328455    5002 kubeadm.go:310] 
	I0913 12:15:48.328471    5002 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 12:15:48.328497    5002 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 12:15:48.328523    5002 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 12:15:48.328528    5002 kubeadm.go:310] 
	I0913 12:15:48.328557    5002 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 12:15:48.328562    5002 kubeadm.go:310] 
	I0913 12:15:48.328666    5002 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 12:15:48.328671    5002 kubeadm.go:310] 
	I0913 12:15:48.328697    5002 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 12:15:48.328733    5002 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 12:15:48.328769    5002 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 12:15:48.328772    5002 kubeadm.go:310] 
	I0913 12:15:48.328845    5002 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 12:15:48.328892    5002 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 12:15:48.328897    5002 kubeadm.go:310] 
	I0913 12:15:48.329047    5002 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uzec93.1x1zuwjayh1tkgqh \
	I0913 12:15:48.329097    5002 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 \
	I0913 12:15:48.329110    5002 kubeadm.go:310] 	--control-plane 
	I0913 12:15:48.329112    5002 kubeadm.go:310] 
	I0913 12:15:48.329157    5002 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 12:15:48.329160    5002 kubeadm.go:310] 
	I0913 12:15:48.329219    5002 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uzec93.1x1zuwjayh1tkgqh \
	I0913 12:15:48.329281    5002 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 
	I0913 12:15:48.329345    5002 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 12:15:48.329357    5002 cni.go:84] Creating CNI manager for ""
	I0913 12:15:48.329365    5002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:15:48.332151    5002 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 12:15:48.340254    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 12:15:48.343363    5002 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 12:15:48.348715    5002 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 12:15:48.348819    5002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-748000 minikube.k8s.io/updated_at=2024_09_13T12_15_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=stopped-upgrade-748000 minikube.k8s.io/primary=true
	I0913 12:15:48.348824    5002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 12:15:48.391907    5002 kubeadm.go:1113] duration metric: took 43.138ms to wait for elevateKubeSystemPrivileges
	I0913 12:15:48.403691    5002 ops.go:34] apiserver oom_adj: -16
	I0913 12:15:48.403764    5002 kubeadm.go:394] duration metric: took 4m11.9115315s to StartCluster
	I0913 12:15:48.403777    5002 settings.go:142] acquiring lock: {Name:mk30414fb8bdc9357b580933d1c04157a3bd6358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:15:48.403864    5002 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:15:48.404261    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:15:48.404441    5002 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:15:48.404542    5002 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 12:15:48.404571    5002 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:15:48.404576    5002 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-748000"
	I0913 12:15:48.404583    5002 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-748000"
	W0913 12:15:48.404587    5002 addons.go:243] addon storage-provisioner should already be in state true
	I0913 12:15:48.404601    5002 host.go:66] Checking if "stopped-upgrade-748000" exists ...
	I0913 12:15:48.404605    5002 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-748000"
	I0913 12:15:48.404630    5002 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-748000"
	I0913 12:15:48.405664    5002 kapi.go:59] client config for stopped-upgrade-748000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063b1540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 12:15:48.405783    5002 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-748000"
	W0913 12:15:48.405789    5002 addons.go:243] addon default-storageclass should already be in state true
	I0913 12:15:48.405796    5002 host.go:66] Checking if "stopped-upgrade-748000" exists ...
	I0913 12:15:48.408146    5002 out.go:177] * Verifying Kubernetes components...
	I0913 12:15:48.408581    5002 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 12:15:48.412182    5002 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 12:15:48.412193    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:15:48.416141    5002 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:15:48.173082    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:48.173203    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:48.190313    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:48.190408    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:48.203015    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:48.203103    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:48.215123    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:48.215215    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:48.227613    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:48.227698    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:48.238492    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:48.238573    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:48.249601    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:48.249676    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:48.260678    4860 logs.go:276] 0 containers: []
	W0913 12:15:48.260691    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:48.260763    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:48.272878    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:48.272898    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:48.272905    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:48.296797    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:48.296814    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:48.330948    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:48.330958    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:48.348919    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:48.348928    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:48.361560    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:48.361575    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:48.374190    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:48.374201    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:48.386221    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:48.386232    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:48.406886    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:48.406896    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:48.419798    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:48.419808    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:48.434579    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:48.434591    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:48.449673    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:48.449687    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:48.461579    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:48.461591    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:48.474711    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:48.474724    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:48.479737    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:48.479746    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:48.516186    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:48.516198    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:51.036264    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:48.420253    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:15:48.424148    5002 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:15:48.424161    5002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 12:15:48.424171    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:15:48.519511    5002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:15:48.525238    5002 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:15:48.525336    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:15:48.529917    5002 api_server.go:72] duration metric: took 125.4675ms to wait for apiserver process to appear ...
	I0913 12:15:48.529926    5002 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:15:48.529934    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:48.567370    5002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 12:15:48.585207    5002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:15:48.935237    5002 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 12:15:48.935248    5002 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 12:15:56.038380    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:56.038556    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:56.051639    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:15:56.051730    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:56.062866    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:15:56.062941    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:56.073554    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:15:56.073643    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:56.085432    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:15:56.085515    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:56.096392    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:15:56.096479    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:56.106871    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:15:56.106956    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:56.117558    4860 logs.go:276] 0 containers: []
	W0913 12:15:56.117569    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:56.117635    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:56.128009    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:15:56.128027    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:15:56.128032    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:15:56.142548    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:15:56.142558    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:15:56.154398    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:15:56.154408    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:15:56.165932    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:15:56.165945    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:15:56.182974    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:56.182983    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:56.207830    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:56.207843    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:56.242074    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:15:56.242086    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:15:56.260034    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:15:56.260044    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:15:56.275362    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:15:56.275372    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:15:56.287986    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:15:56.288001    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:15:56.306924    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:15:56.306935    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:15:56.318997    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:15:56.319011    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:56.331108    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:56.331119    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:53.531869    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:53.531938    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:56.366640    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:56.366648    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:56.371261    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:15:56.371270    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:15:58.885516    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:58.532198    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:58.532233    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:03.887586    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:03.887758    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:16:03.900435    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:16:03.900525    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:16:03.911193    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:16:03.911277    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:16:03.921738    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:16:03.921823    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:16:03.932137    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:16:03.932218    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:16:03.942204    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:16:03.942288    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:16:03.953365    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:16:03.953440    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:16:03.965469    4860 logs.go:276] 0 containers: []
	W0913 12:16:03.965480    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:16:03.965553    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:16:03.983973    4860 logs.go:276] 1 containers: [b2d29a0663c8]
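
Each polling round starts by rediscovering the control-plane containers through the dockershim/cri-dockerd naming convention: kubelet-managed containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name filter on the k8s_ prefix plus the component name is enough to find each ID, which is then fed to "docker logs --tail 400 <id>". The same lookup can be reproduced by hand; a minimal sketch, assuming the profile name from this run and minikube's ssh command pass-through:

    # discover the apiserver container ID the same way logs.go does
    minikube -p running-upgrade-383000 ssh -- \
      "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
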
	I0913 12:16:03.983990    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:16:03.983996    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:16:03.996290    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:16:03.996303    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:16:04.012287    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:16:04.012296    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:16:04.023874    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:16:04.023883    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:16:04.028355    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:16:04.028362    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:16:04.062296    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:16:04.062308    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:16:04.080468    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:16:04.080480    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:16:04.092448    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:16:04.092459    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:16:04.116106    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:16:04.116116    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:16:04.148896    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:16:04.148904    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:16:04.171214    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:16:04.171225    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:16:04.185369    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:16:04.185384    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:16:04.200833    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:16:04.200846    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:16:04.212792    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:16:04.212804    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:16:04.224738    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:16:04.224749    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:16:03.532444    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:03.532465    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:06.738351    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:08.533200    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:08.533226    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:11.740620    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:11.740878    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:16:11.763406    4860 logs.go:276] 1 containers: [086b0666273e]
	I0913 12:16:11.763531    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:16:11.779808    4860 logs.go:276] 1 containers: [dd8b5820b9e6]
	I0913 12:16:11.779905    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:16:11.796410    4860 logs.go:276] 4 containers: [71a9bcb3fca6 13129c22c063 5e5d0ac313df e3a1ee1ec846]
	I0913 12:16:11.796497    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:16:11.807121    4860 logs.go:276] 1 containers: [b29c9f05bb53]
	I0913 12:16:11.807196    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:16:11.817271    4860 logs.go:276] 1 containers: [0ab16e654516]
	I0913 12:16:11.817351    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:16:11.828666    4860 logs.go:276] 1 containers: [e6408d87eddb]
	I0913 12:16:11.828750    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:16:11.840337    4860 logs.go:276] 0 containers: []
	W0913 12:16:11.840349    4860 logs.go:278] No container was found matching "kindnet"
	I0913 12:16:11.840428    4860 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:16:11.850718    4860 logs.go:276] 1 containers: [b2d29a0663c8]
	I0913 12:16:11.850749    4860 logs.go:123] Gathering logs for kubelet ...
	I0913 12:16:11.850757    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:16:11.883507    4860 logs.go:123] Gathering logs for etcd [dd8b5820b9e6] ...
	I0913 12:16:11.883516    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd8b5820b9e6"
	I0913 12:16:11.898021    4860 logs.go:123] Gathering logs for coredns [e3a1ee1ec846] ...
	I0913 12:16:11.898032    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3a1ee1ec846"
	I0913 12:16:11.910103    4860 logs.go:123] Gathering logs for kube-proxy [0ab16e654516] ...
	I0913 12:16:11.910119    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab16e654516"
	I0913 12:16:11.922275    4860 logs.go:123] Gathering logs for Docker ...
	I0913 12:16:11.922289    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:16:11.947456    4860 logs.go:123] Gathering logs for container status ...
	I0913 12:16:11.947466    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:16:11.960582    4860 logs.go:123] Gathering logs for coredns [5e5d0ac313df] ...
	I0913 12:16:11.960596    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5d0ac313df"
	I0913 12:16:11.972900    4860 logs.go:123] Gathering logs for dmesg ...
	I0913 12:16:11.972914    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:16:11.977521    4860 logs.go:123] Gathering logs for kube-apiserver [086b0666273e] ...
	I0913 12:16:11.977527    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 086b0666273e"
	I0913 12:16:11.993122    4860 logs.go:123] Gathering logs for coredns [71a9bcb3fca6] ...
	I0913 12:16:11.993133    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a9bcb3fca6"
	I0913 12:16:12.007818    4860 logs.go:123] Gathering logs for coredns [13129c22c063] ...
	I0913 12:16:12.007830    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13129c22c063"
	I0913 12:16:12.019989    4860 logs.go:123] Gathering logs for kube-scheduler [b29c9f05bb53] ...
	I0913 12:16:12.020000    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b29c9f05bb53"
	I0913 12:16:12.035000    4860 logs.go:123] Gathering logs for kube-controller-manager [e6408d87eddb] ...
	I0913 12:16:12.035011    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6408d87eddb"
	I0913 12:16:12.052611    4860 logs.go:123] Gathering logs for storage-provisioner [b2d29a0663c8] ...
	I0913 12:16:12.052621    4860 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2d29a0663c8"
	I0913 12:16:12.065070    4860 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:16:12.065081    4860 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:16:14.601964    4860 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:13.533859    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:13.533891    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:18.535155    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:18.535199    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 12:16:18.936400    5002 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 12:16:18.940638    5002 out.go:177] * Enabled addons: storage-provisioner
	I0913 12:16:19.603965    4860 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:19.609651    4860 out.go:201] 
	W0913 12:16:19.613462    4860 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0913 12:16:19.613475    4860 out.go:270] * 
	W0913 12:16:19.614440    4860 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:16:19.625574    4860 out.go:201] 
	I0913 12:16:18.948555    5002 addons.go:510] duration metric: took 30.545290875s for enable addons: enabled=[storage-provisioner]
	I0913 12:16:23.536362    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:23.536402    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:28.538064    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:28.538104    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
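
Two concurrent minikube processes (PIDs 4860 and 5002 in the log prefix) are polling https://10.0.2.15:8443/healthz; each probe hangs for the ~5s client timeout and is retried immediately, until PID 4860 exhausts its 6m0s node-wait budget and exits with GUEST_START. Since the container inventory below shows the control plane running, the failure is more plausibly host-side reachability of the QEMU user-network address 10.0.2.15 than the apiserver itself. A minimal manual probe from inside the guest, as a sketch (-k because healthz is served on the apiserver's TLS certificate):

    minikube -p running-upgrade-383000 ssh -- \
      curl -k https://10.0.2.15:8443/healthz
    # a healthy apiserver answers with the literal body: ok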
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-13 19:07:29 UTC, ends at Fri 2024-09-13 19:16:35 UTC. --
	Sep 13 19:16:20 running-upgrade-383000 dockerd[2882]: time="2024-09-13T19:16:20.524243307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 19:16:20 running-upgrade-383000 dockerd[2882]: time="2024-09-13T19:16:20.524273723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 19:16:20 running-upgrade-383000 dockerd[2882]: time="2024-09-13T19:16:20.524279681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 19:16:20 running-upgrade-383000 dockerd[2882]: time="2024-09-13T19:16:20.524327720Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1fb41e3cd1a804f205f4ba708ec076a23b94352630cd4858986632b5fd4a3f4d pid=19154 runtime=io.containerd.runc.v2
	Sep 13 19:16:21 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:21Z" level=error msg="ContainerStats resp: {0x40005f0340 linux}"
	Sep 13 19:16:21 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x40003a2e00 linux}"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x400009d1c0 linux}"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x400009d9c0 linux}"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x400070dc40 linux}"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x40009d8a40 linux}"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x400078a100 linux}"
	Sep 13 19:16:22 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:22Z" level=error msg="ContainerStats resp: {0x400078a8c0 linux}"
	Sep 13 19:16:26 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:26Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 13 19:16:31 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:31Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 13 19:16:32 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:32Z" level=error msg="ContainerStats resp: {0x4000760580 linux}"
	Sep 13 19:16:32 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:32Z" level=error msg="ContainerStats resp: {0x40007606c0 linux}"
	Sep 13 19:16:33 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:33Z" level=error msg="ContainerStats resp: {0x40005f0300 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x40008b9000 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x40008b93c0 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x40005f15c0 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x40008b9d80 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x40005f1e40 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x400078a280 linux}"
	Sep 13 19:16:34 running-upgrade-383000 cri-dockerd[2723]: time="2024-09-13T19:16:34Z" level=error msg="ContainerStats resp: {0x400078a640 linux}"
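
Nothing in the Docker journal suggests a daemon fault: containerd loads its runc shim plugins for the new coredns task, the recurring cri-dockerd "ContainerStats resp" lines look like stats responses logged at error severity rather than real failures, and the CNI configuration reloads are routine. The cri-docker unit can be inspected on its own if needed; a sketch mirroring the journalctl invocation used by the gatherer:

    minikube -p running-upgrade-383000 ssh -- \
      "sudo journalctl -u cri-docker -n 400"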
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1fb41e3cd1a80       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   91dfc9ce81770
	46f7ae4c95e26       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   f307334a4e553
	71a9bcb3fca6f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   91dfc9ce81770
	13129c22c0636       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f307334a4e553
	b2d29a0663c8f       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   ef94e2c952bf5
	0ab16e654516f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   319274851effd
	b29c9f05bb536       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   9903ef65b2e20
	dd8b5820b9e68       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   377df59f7f5af
	e6408d87eddb0       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   f562c272fb974
	086b0666273e6       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   b8aa12060e330
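
The table itself reads healthy: etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy and storage-provisioner have been up for about four minutes, and the two coredns pods are on attempt 2, their attempt-1 containers having exited roughly 15 seconds before this snapshot. The snapshot comes from the crictl-with-docker-fallback idiom in the gathering loop; an equivalent sketch (single quotes keep the command substitution on the guest side):

    minikube -p running-upgrade-383000 ssh -- \
      'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'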
	
	
	==> coredns [13129c22c063] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:43753->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:46745->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:33401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:45594->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:50018->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:48133->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:45515->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:46294->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:56265->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1708485273189147084.2624518485947777531. HINFO: read udp 10.244.0.3:53820->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1fb41e3cd1a8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7205065704929354061.925779512195602877. HINFO: read udp 10.244.0.2:52190->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7205065704929354061.925779512195602877. HINFO: read udp 10.244.0.2:36099->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7205065704929354061.925779512195602877. HINFO: read udp 10.244.0.2:56635->10.0.2.3:53: i/o timeout
	
	
	==> coredns [46f7ae4c95e2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7857359170557739765.4743378600126887489. HINFO: read udp 10.244.0.3:56201->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7857359170557739765.4743378600126887489. HINFO: read udp 10.244.0.3:45631->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7857359170557739765.4743378600126887489. HINFO: read udp 10.244.0.3:47976->10.0.2.3:53: i/o timeout
	
	
	==> coredns [71a9bcb3fca6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:53471->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:51848->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:38915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:39203->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:40115->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:37389->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:49996->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:45721->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:39236->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6075074467588208871.1353341837119753005. HINFO: read udp 10.244.0.2:45411->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
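
All four CoreDNS instances fail identically: the random-label HINFO probes that the loop-detection plugin sends to the configured upstream, 10.0.2.3:53, time out. 10.0.2.3 is the built-in resolver of QEMU's user-mode (slirp) network, so DNS forwarding out of the guest is broken as well, pointing at the same QEMU networking layer as the healthz timeouts. Were the apiserver reachable, the standard in-cluster check from the Kubernetes DNS-debugging guide would be, as a sketch:

    kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
      -- nslookup kubernetes.default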
	
	
	==> describe nodes <==
	Name:               running-upgrade-383000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-383000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92
	                    minikube.k8s.io/name=running-upgrade-383000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T12_12_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 19:12:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-383000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 19:16:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 19:12:18 +0000   Fri, 13 Sep 2024 19:12:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 19:12:18 +0000   Fri, 13 Sep 2024 19:12:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 19:12:18 +0000   Fri, 13 Sep 2024 19:12:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 19:12:18 +0000   Fri, 13 Sep 2024 19:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-383000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 70095824238642a7ac5011c07f554d6b
	  System UUID:                70095824238642a7ac5011c07f554d6b
	  Boot ID:                    0ee39550-2c4d-4046-a79c-a11b0d6de4fb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hlxct                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-tl4ft                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-383000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-383000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-383000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-4l72s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-383000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-383000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-383000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-383000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-383000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-383000 event: Registered Node running-upgrade-383000 in Controller
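
The node view agrees with the container inventory: running-upgrade-383000 is Ready and untainted, all eight kube-system pods are placed, and the requested 850m CPU against the 2-CPU capacity yields the 42% figure shown. In other words, the kubelet and controllers consider the cluster healthy even while the test host cannot reach port 8443. With a working connection the Ready condition could be read directly; a sketch:

    kubectl get node running-upgrade-383000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'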
	
	
	==> dmesg <==
	[  +1.466334] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.066835] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.077605] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.136856] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.078345] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.073295] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.468795] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[  +8.647164] systemd-fstab-generator[1841]: Ignoring "noauto" for root device
	[  +3.016200] systemd-fstab-generator[2200]: Ignoring "noauto" for root device
	[  +0.146999] systemd-fstab-generator[2234]: Ignoring "noauto" for root device
	[  +0.094684] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.094763] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +2.157194] kauditd_printk_skb: 47 callbacks suppressed
	[Sep13 19:08] systemd-fstab-generator[2680]: Ignoring "noauto" for root device
	[  +0.067780] systemd-fstab-generator[2691]: Ignoring "noauto" for root device
	[  +0.062841] systemd-fstab-generator[2702]: Ignoring "noauto" for root device
	[  +0.082246] systemd-fstab-generator[2716]: Ignoring "noauto" for root device
	[  +2.475772] systemd-fstab-generator[2869]: Ignoring "noauto" for root device
	[  +3.151399] systemd-fstab-generator[3624]: Ignoring "noauto" for root device
	[  +1.870700] systemd-fstab-generator[4192]: Ignoring "noauto" for root device
	[ +18.619584] kauditd_printk_skb: 68 callbacks suppressed
	[Sep13 19:12] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.563609] systemd-fstab-generator[12208]: Ignoring "noauto" for root device
	[  +5.626922] systemd-fstab-generator[12802]: Ignoring "noauto" for root device
	[  +0.473551] systemd-fstab-generator[12933]: Ignoring "noauto" for root device
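
The dmesg excerpt is routine Buildroot boot noise; systemd-fstab-generator's "Ignoring noauto" lines and the suppressed kauditd callbacks do not indicate a fault. The gatherer already filters to warning level and above, and the same slice can be replayed in the guest (the flags are taken verbatim from the Run lines above):

    minikube -p running-upgrade-383000 ssh -- \
      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"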
	
	
	==> etcd [dd8b5820b9e6] <==
	{"level":"info","ts":"2024-09-13T19:12:14.358Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T19:12:14.358Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-13T19:12:14.358Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-13T19:12:14.358Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T19:12:14.358Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-13T19:12:14.359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-13T19:12:14.359Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-13T19:12:15.220Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:12:15.221Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:12:15.221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:12:15.221Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T19:12:15.221Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:12:15.221Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-383000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T19:12:15.221Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T19:12:15.222Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T19:12:15.223Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-13T19:12:15.223Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T19:12:15.223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:16:35 up 9 min,  0 users,  load average: 0.20, 0.22, 0.13
	Linux running-upgrade-383000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [086b0666273e] <==
	I0913 19:12:16.437951       1 controller.go:611] quota admission added evaluator for: namespaces
	I0913 19:12:16.483195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0913 19:12:16.483239       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0913 19:12:16.483254       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0913 19:12:16.483272       1 cache.go:39] Caches are synced for autoregister controller
	I0913 19:12:16.483377       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0913 19:12:16.506807       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0913 19:12:17.213466       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0913 19:12:17.371199       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0913 19:12:17.374397       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0913 19:12:17.374421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 19:12:17.520378       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 19:12:17.533262       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 19:12:17.644312       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0913 19:12:17.646278       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0913 19:12:17.646678       1 controller.go:611] quota admission added evaluator for: endpoints
	I0913 19:12:17.647923       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0913 19:12:18.509144       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0913 19:12:18.737746       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0913 19:12:18.741453       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0913 19:12:18.772125       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0913 19:12:18.790982       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 19:12:31.965863       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0913 19:12:32.266138       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0913 19:12:32.496780       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
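
The apiserver log is a clean startup: caches sync, bootstrap priority classes and RBAC objects are created, the default/kubernetes and kube-system/kube-dns clusterIPs are allocated, and quota evaluators register as workloads appear, with no crash or restart in the window. That again frames the healthz failures as a transport problem. Given connectivity, the aggregated readiness checks could be dumped through the raw API; a sketch:

    kubectl get --raw='/readyz?verbose'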
	
	
	==> kube-controller-manager [e6408d87eddb] <==
	I0913 19:12:31.316392       1 shared_informer.go:262] Caches are synced for HPA
	I0913 19:12:31.317908       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0913 19:12:31.320487       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0913 19:12:31.324271       1 shared_informer.go:262] Caches are synced for PVC protection
	I0913 19:12:31.326478       1 shared_informer.go:262] Caches are synced for service account
	I0913 19:12:31.359735       1 shared_informer.go:262] Caches are synced for ephemeral
	I0913 19:12:31.359800       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0913 19:12:31.362143       1 shared_informer.go:262] Caches are synced for cronjob
	I0913 19:12:31.363099       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0913 19:12:31.364337       1 shared_informer.go:262] Caches are synced for PV protection
	I0913 19:12:31.364363       1 shared_informer.go:262] Caches are synced for endpoint
	I0913 19:12:31.422813       1 shared_informer.go:262] Caches are synced for deployment
	I0913 19:12:31.428906       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0913 19:12:31.509738       1 shared_informer.go:262] Caches are synced for stateful set
	I0913 19:12:31.513523       1 shared_informer.go:262] Caches are synced for disruption
	I0913 19:12:31.513533       1 disruption.go:371] Sending events to api server.
	I0913 19:12:31.568739       1 shared_informer.go:262] Caches are synced for resource quota
	I0913 19:12:31.569850       1 shared_informer.go:262] Caches are synced for resource quota
	I0913 19:12:31.968833       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4l72s"
	I0913 19:12:31.979813       1 shared_informer.go:262] Caches are synced for garbage collector
	I0913 19:12:32.003671       1 shared_informer.go:262] Caches are synced for garbage collector
	I0913 19:12:32.003682       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0913 19:12:32.269476       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0913 19:12:32.368380       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-tl4ft"
	I0913 19:12:32.374796       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hlxct"
	
	
	==> kube-proxy [0ab16e654516] <==
	I0913 19:12:32.478866       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0913 19:12:32.478891       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0913 19:12:32.478903       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0913 19:12:32.494977       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0913 19:12:32.494987       1 server_others.go:206] "Using iptables Proxier"
	I0913 19:12:32.495003       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0913 19:12:32.495108       1 server.go:661] "Version info" version="v1.24.1"
	I0913 19:12:32.495112       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 19:12:32.495620       1 config.go:317] "Starting service config controller"
	I0913 19:12:32.495623       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0913 19:12:32.495631       1 config.go:226] "Starting endpoint slice config controller"
	I0913 19:12:32.495632       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0913 19:12:32.495799       1 config.go:444] "Starting node config controller"
	I0913 19:12:32.495801       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0913 19:12:32.596968       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0913 19:12:32.596968       1 shared_informer.go:262] Caches are synced for node config
	I0913 19:12:32.597004       1 shared_informer.go:262] Caches are synced for service config
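
kube-proxy detected node IP 10.0.2.15, fell back to the iptables proxier because no proxy mode was configured, and synced all three config controllers (service, endpoint slice, node), so Service routing inside the guest should be functional. The programmed rules can be checked from the guest; a sketch, assuming the iptables binary is present in the minikube image (KUBE-SERVICES is the iptables proxier's entry chain):

    minikube -p running-upgrade-383000 ssh -- \
      "sudo iptables -t nat -L KUBE-SERVICES -n"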
	
	
	==> kube-scheduler [b29c9f05bb53] <==
	W0913 19:12:16.434392       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 19:12:16.434395       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0913 19:12:16.434410       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 19:12:16.434774       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0913 19:12:16.434525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 19:12:16.434785       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0913 19:12:16.434541       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 19:12:16.434790       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0913 19:12:16.434580       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 19:12:16.434794       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0913 19:12:16.434593       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 19:12:16.435049       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0913 19:12:16.434605       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 19:12:16.435099       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0913 19:12:16.434616       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 19:12:16.435108       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0913 19:12:17.341816       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 19:12:17.341876       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0913 19:12:17.346433       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 19:12:17.346574       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0913 19:12:17.387067       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 19:12:17.387093       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0913 19:12:17.422649       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 19:12:17.422661       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0913 19:12:17.722671       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
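
The scheduler's burst of "forbidden" list/watch failures is confined to the first second or two of apiserver startup, before the bootstrap RBAC bindings for system:kube-scheduler are reconciled; the closing "Caches are synced" line shows it recovered on its own. With a reachable apiserver the grants could be verified by impersonation; a sketch:

    kubectl auth can-i list poddisruptionbudgets --as=system:kube-scheduler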
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-13 19:07:29 UTC, ends at Fri 2024-09-13 19:16:36 UTC. --
	Sep 13 19:12:20 running-upgrade-383000 kubelet[12808]: E0913 19:12:20.566412   12808 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-383000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-383000"
	Sep 13 19:12:20 running-upgrade-383000 kubelet[12808]: E0913 19:12:20.766959   12808 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-383000\" already exists" pod="kube-system/etcd-running-upgrade-383000"
	Sep 13 19:12:20 running-upgrade-383000 kubelet[12808]: I0913 19:12:20.962568   12808 request.go:601] Waited for 1.114829091s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 13 19:12:20 running-upgrade-383000 kubelet[12808]: E0913 19:12:20.967134   12808 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-383000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-383000"
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: I0913 19:12:31.292504   12808 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: I0913 19:12:31.292795   12808 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: I0913 19:12:31.317026   12808 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: I0913 19:12:31.493592   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/98aa93b2-14bf-431b-8ae0-2c650cda2555-tmp\") pod \"storage-provisioner\" (UID: \"98aa93b2-14bf-431b-8ae0-2c650cda2555\") " pod="kube-system/storage-provisioner"
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: I0913 19:12:31.493626   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqq4\" (UniqueName: \"kubernetes.io/projected/98aa93b2-14bf-431b-8ae0-2c650cda2555-kube-api-access-nqqq4\") pod \"storage-provisioner\" (UID: \"98aa93b2-14bf-431b-8ae0-2c650cda2555\") " pod="kube-system/storage-provisioner"
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: E0913 19:12:31.601277   12808 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: E0913 19:12:31.601301   12808 projected.go:192] Error preparing data for projected volume kube-api-access-nqqq4 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: E0913 19:12:31.601340   12808 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/98aa93b2-14bf-431b-8ae0-2c650cda2555-kube-api-access-nqqq4 podName:98aa93b2-14bf-431b-8ae0-2c650cda2555 nodeName:}" failed. No retries permitted until 2024-09-13 19:12:32.101325862 +0000 UTC m=+13.375981319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nqqq4" (UniqueName: "kubernetes.io/projected/98aa93b2-14bf-431b-8ae0-2c650cda2555-kube-api-access-nqqq4") pod "storage-provisioner" (UID: "98aa93b2-14bf-431b-8ae0-2c650cda2555") : configmap "kube-root-ca.crt" not found
	Sep 13 19:12:31 running-upgrade-383000 kubelet[12808]: I0913 19:12:31.971712   12808 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.102840   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t48l\" (UniqueName: \"kubernetes.io/projected/dd9c4ad9-978d-4358-8fba-e0c8fc83facb-kube-api-access-2t48l\") pod \"kube-proxy-4l72s\" (UID: \"dd9c4ad9-978d-4358-8fba-e0c8fc83facb\") " pod="kube-system/kube-proxy-4l72s"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.102883   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd9c4ad9-978d-4358-8fba-e0c8fc83facb-kube-proxy\") pod \"kube-proxy-4l72s\" (UID: \"dd9c4ad9-978d-4358-8fba-e0c8fc83facb\") " pod="kube-system/kube-proxy-4l72s"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.102915   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd9c4ad9-978d-4358-8fba-e0c8fc83facb-xtables-lock\") pod \"kube-proxy-4l72s\" (UID: \"dd9c4ad9-978d-4358-8fba-e0c8fc83facb\") " pod="kube-system/kube-proxy-4l72s"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.102928   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd9c4ad9-978d-4358-8fba-e0c8fc83facb-lib-modules\") pod \"kube-proxy-4l72s\" (UID: \"dd9c4ad9-978d-4358-8fba-e0c8fc83facb\") " pod="kube-system/kube-proxy-4l72s"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.370601   12808 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.377603   12808 topology_manager.go:200] "Topology Admit Handler"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.405761   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16acf9a9-ef67-4212-93bc-23c7ab8dce2a-config-volume\") pod \"coredns-6d4b75cb6d-tl4ft\" (UID: \"16acf9a9-ef67-4212-93bc-23c7ab8dce2a\") " pod="kube-system/coredns-6d4b75cb6d-tl4ft"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.405792   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2swd\" (UniqueName: \"kubernetes.io/projected/16acf9a9-ef67-4212-93bc-23c7ab8dce2a-kube-api-access-q2swd\") pod \"coredns-6d4b75cb6d-tl4ft\" (UID: \"16acf9a9-ef67-4212-93bc-23c7ab8dce2a\") " pod="kube-system/coredns-6d4b75cb6d-tl4ft"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.405803   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06d01dca-1978-40bf-a298-2b60b34f91be-config-volume\") pod \"coredns-6d4b75cb6d-hlxct\" (UID: \"06d01dca-1978-40bf-a298-2b60b34f91be\") " pod="kube-system/coredns-6d4b75cb6d-hlxct"
	Sep 13 19:12:32 running-upgrade-383000 kubelet[12808]: I0913 19:12:32.405815   12808 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dph\" (UniqueName: \"kubernetes.io/projected/06d01dca-1978-40bf-a298-2b60b34f91be-kube-api-access-h7dph\") pod \"coredns-6d4b75cb6d-hlxct\" (UID: \"06d01dca-1978-40bf-a298-2b60b34f91be\") " pod="kube-system/coredns-6d4b75cb6d-hlxct"
	Sep 13 19:16:21 running-upgrade-383000 kubelet[12808]: I0913 19:16:21.267909   12808 scope.go:110] "RemoveContainer" containerID="e3a1ee1ec846021fa9edb8cf1973de571ca8f8649c77543ee5ac00fc758ea420"
	Sep 13 19:16:21 running-upgrade-383000 kubelet[12808]: I0913 19:16:21.296641   12808 scope.go:110] "RemoveContainer" containerID="5e5d0ac313dfb80ad6536a10f8e1b58a26113f138eb79fc308a81631bd7e09af"
	
	
	==> storage-provisioner [b2d29a0663c8] <==
	I0913 19:12:32.505542       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 19:12:32.510520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 19:12:32.510544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 19:12:32.513737       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 19:12:32.513788       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-383000_3b5bf3c5-80df-4022-8bdf-100cb5212241!
	I0913 19:12:32.513969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8525cb88-e487-43f2-be24-2fcf021ee7da", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-383000_3b5bf3c5-80df-4022-8bdf-100cb5212241 became leader
	I0913 19:12:32.614565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-383000_3b5bf3c5-80df-4022-8bdf-100cb5212241!

-- /stdout --
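The kubelet errors at 19:12:31 in the log above are a routine startup race rather than this test's failure: the pod's projected service-account volume bundles the "kube-root-ca.crt" configmap, which the controller manager's root-CA publisher only creates in each namespace shortly after the control plane comes up, so the mount is retried 500ms later (the nestedpendingoperations backoff) and the volumes attach on the next pass at 19:12:32. A minimal client-go probe for that configmap, as a sketch (the standalone program and its kubeconfig handling are assumptions for illustration, not part of the test suite):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig, build a clientset, and perform the
	// same lookup the kubelet's projected-volume mount does above.
	kubeconfig := clientcmd.NewDefaultClientConfigLoadingRules().GetDefaultFilename()
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
	// err == nil once the root-CA publisher has populated the namespace;
	// until then the kubelet logs exactly the "configmap not found" retries seen above.
	fmt.Printf("kube-root-ca.crt present: %v (err: %v)\n", err == nil, err)
}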
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-383000 -n running-upgrade-383000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-383000 -n running-upgrade-383000: exit status 2 (15.632448667s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-383000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-383000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-383000
--- FAIL: TestRunningBinaryUpgrade (596.96s)

TestKubernetesUpgrade (18.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-965000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-965000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.80717075s)

-- stdout --
	* [kubernetes-upgrade-965000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-965000" primary control-plane node in "kubernetes-upgrade-965000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-965000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0913 12:09:54.883251    4929 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:09:54.883405    4929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:09:54.883409    4929 out.go:358] Setting ErrFile to fd 2...
	I0913 12:09:54.883411    4929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:09:54.883527    4929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:09:54.884630    4929 out.go:352] Setting JSON to false
	I0913 12:09:54.901097    4929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4157,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:09:54.901166    4929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:09:54.906214    4929 out.go:177] * [kubernetes-upgrade-965000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:09:54.914107    4929 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:09:54.914145    4929 notify.go:220] Checking for updates...
	I0913 12:09:54.920096    4929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:09:54.923128    4929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:09:54.926169    4929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:09:54.929092    4929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:09:54.932100    4929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:09:54.935303    4929 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:09:54.935372    4929 config.go:182] Loaded profile config "running-upgrade-383000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:09:54.935419    4929 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:09:54.939101    4929 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:09:54.944980    4929 start.go:297] selected driver: qemu2
	I0913 12:09:54.944985    4929 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:09:54.944991    4929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:09:54.947183    4929 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:09:54.950133    4929 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:09:54.953187    4929 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 12:09:54.953203    4929 cni.go:84] Creating CNI manager for ""
	I0913 12:09:54.953230    4929 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 12:09:54.953269    4929 start.go:340] cluster config:
	{Name:kubernetes-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:09:54.956765    4929 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:09:54.964069    4929 out.go:177] * Starting "kubernetes-upgrade-965000" primary control-plane node in "kubernetes-upgrade-965000" cluster
	I0913 12:09:54.968166    4929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 12:09:54.968183    4929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 12:09:54.968200    4929 cache.go:56] Caching tarball of preloaded images
	I0913 12:09:54.968260    4929 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:09:54.968265    4929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 12:09:54.968342    4929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kubernetes-upgrade-965000/config.json ...
	I0913 12:09:54.968353    4929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kubernetes-upgrade-965000/config.json: {Name:mk3e2ceb96fbed3c2f9b3c1f0efb8dbe44e45665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:09:54.968691    4929 start.go:360] acquireMachinesLock for kubernetes-upgrade-965000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:09:54.968723    4929 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "kubernetes-upgrade-965000"
	I0913 12:09:54.968732    4929 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:09:54.968759    4929 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:09:54.973123    4929 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:09:54.988615    4929 start.go:159] libmachine.API.Create for "kubernetes-upgrade-965000" (driver="qemu2")
	I0913 12:09:54.988637    4929 client.go:168] LocalClient.Create starting
	I0913 12:09:54.988695    4929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:09:54.988729    4929 main.go:141] libmachine: Decoding PEM data...
	I0913 12:09:54.988738    4929 main.go:141] libmachine: Parsing certificate...
	I0913 12:09:54.988779    4929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:09:54.988803    4929 main.go:141] libmachine: Decoding PEM data...
	I0913 12:09:54.988810    4929 main.go:141] libmachine: Parsing certificate...
	I0913 12:09:54.989210    4929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:09:55.161993    4929 main.go:141] libmachine: Creating SSH key...
	I0913 12:09:55.263731    4929 main.go:141] libmachine: Creating Disk image...
	I0913 12:09:55.263737    4929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:09:55.263948    4929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:09:55.273298    4929 main.go:141] libmachine: STDOUT: 
	I0913 12:09:55.273318    4929 main.go:141] libmachine: STDERR: 
	I0913 12:09:55.273380    4929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2 +20000M
	I0913 12:09:55.281489    4929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:09:55.281503    4929 main.go:141] libmachine: STDERR: 
	I0913 12:09:55.281514    4929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:09:55.281520    4929 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:09:55.281533    4929 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:09:55.281561    4929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e3:4a:f8:56:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:09:55.283066    4929 main.go:141] libmachine: STDOUT: 
	I0913 12:09:55.283079    4929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:09:55.283099    4929 client.go:171] duration metric: took 294.468792ms to LocalClient.Create
	I0913 12:09:57.285196    4929 start.go:128] duration metric: took 2.316504792s to createHost
	I0913 12:09:57.285282    4929 start.go:83] releasing machines lock for "kubernetes-upgrade-965000", held for 2.316643042s
	W0913 12:09:57.285339    4929 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:09:57.293968    4929 out.go:177] * Deleting "kubernetes-upgrade-965000" in qemu2 ...
	W0913 12:09:57.326960    4929 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:09:57.326994    4929 start.go:729] Will try again in 5 seconds ...
	I0913 12:10:02.329002    4929 start.go:360] acquireMachinesLock for kubernetes-upgrade-965000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:10:02.329576    4929 start.go:364] duration metric: took 466.667µs to acquireMachinesLock for "kubernetes-upgrade-965000"
	I0913 12:10:02.329719    4929 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:10:02.330026    4929 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:10:02.334750    4929 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:10:02.385245    4929 start.go:159] libmachine.API.Create for "kubernetes-upgrade-965000" (driver="qemu2")
	I0913 12:10:02.385301    4929 client.go:168] LocalClient.Create starting
	I0913 12:10:02.385429    4929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:10:02.385501    4929 main.go:141] libmachine: Decoding PEM data...
	I0913 12:10:02.385519    4929 main.go:141] libmachine: Parsing certificate...
	I0913 12:10:02.385584    4929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:10:02.385631    4929 main.go:141] libmachine: Decoding PEM data...
	I0913 12:10:02.385697    4929 main.go:141] libmachine: Parsing certificate...
	I0913 12:10:02.386358    4929 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:10:02.558184    4929 main.go:141] libmachine: Creating SSH key...
	I0913 12:10:02.607000    4929 main.go:141] libmachine: Creating Disk image...
	I0913 12:10:02.607007    4929 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:10:02.607225    4929 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:10:02.616512    4929 main.go:141] libmachine: STDOUT: 
	I0913 12:10:02.616534    4929 main.go:141] libmachine: STDERR: 
	I0913 12:10:02.616607    4929 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2 +20000M
	I0913 12:10:02.624677    4929 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:10:02.624692    4929 main.go:141] libmachine: STDERR: 
	I0913 12:10:02.624703    4929 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:10:02.624707    4929 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:10:02.624720    4929 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:10:02.624748    4929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:06:28:92:4b:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:10:02.626416    4929 main.go:141] libmachine: STDOUT: 
	I0913 12:10:02.626429    4929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:10:02.626441    4929 client.go:171] duration metric: took 241.145042ms to LocalClient.Create
	I0913 12:10:04.627307    4929 start.go:128] duration metric: took 2.297352916s to createHost
	I0913 12:10:04.627327    4929 start.go:83] releasing machines lock for "kubernetes-upgrade-965000", held for 2.297793583s
	W0913 12:10:04.627421    4929 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:10:04.636506    4929 out.go:201] 
	W0913 12:10:04.643572    4929 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:10:04.643579    4929 out.go:270] * 
	W0913 12:10:04.644048    4929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:10:04.650539    4929 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-965000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
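Both create attempts die at the same first step: libmachine execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket /var/run/socket_vmnet and then hand the connection to QEMU (the -netdev socket,id=net0,fd=3 in the command line above). "Connection refused" on a unix socket means the socket file exists but nothing is accepting on it, i.e. the socket_vmnet daemon on this agent is dead or wedged; a missing file would instead report "no such file or directory". A stand-alone pre-flight probe of that first step, as a sketch (this program is illustrative and not part of minikube or the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket socket_vmnet_client uses; the path is
	// taken from the failing command line above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon accepting, this reproduces the log's
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections; QEMU could be launched")
}

Run before the suite, a check like this would fail fast and point at the host daemon instead of burning a VM create/delete cycle in every test.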
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-965000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-965000: (3.133701958s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-965000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-965000 status --format={{.Host}}: exit status 7 (47.202333ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
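The harness treats this non-zero status as acceptable because the profile was just stopped: in this run, exit status 7 from "minikube status" pairs with the "Stopped" output above. A sketch of that tolerance (the helper is hypothetical, and the 0/7 mapping is inferred from this log rather than taken from minikube's documented exit-code table):

package main

import "fmt"

// statusTolerable mirrors the "(may be ok)" handling above: 0 means the
// host is running, and 7 (seen here alongside "Stopped") is the expected
// state immediately after `minikube stop`, so the test proceeds.
func statusTolerable(exitCode int) bool {
	return exitCode == 0 || exitCode == 7
}

func main() {
	fmt.Println(statusTolerable(7)) // true: stopped host, continue to the upgrade start
}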
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-965000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-965000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.172388041s)

-- stdout --
	* [kubernetes-upgrade-965000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-965000" primary control-plane node in "kubernetes-upgrade-965000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-965000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0913 12:10:07.873202    4963 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:10:07.873334    4963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:10:07.873338    4963 out.go:358] Setting ErrFile to fd 2...
	I0913 12:10:07.873340    4963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:10:07.873460    4963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:10:07.874538    4963 out.go:352] Setting JSON to false
	I0913 12:10:07.891115    4963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4170,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:10:07.891186    4963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:10:07.895986    4963 out.go:177] * [kubernetes-upgrade-965000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:10:07.903955    4963 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:10:07.903989    4963 notify.go:220] Checking for updates...
	I0913 12:10:07.910897    4963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:10:07.912236    4963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:10:07.914867    4963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:10:07.917975    4963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:10:07.920965    4963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:10:07.924184    4963 config.go:182] Loaded profile config "kubernetes-upgrade-965000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0913 12:10:07.924429    4963 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:10:07.928903    4963 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:10:07.935917    4963 start.go:297] selected driver: qemu2
	I0913 12:10:07.935924    4963 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:10:07.935970    4963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:10:07.938209    4963 cni.go:84] Creating CNI manager for ""
	I0913 12:10:07.938242    4963 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:10:07.938279    4963 start.go:340] cluster config:
	{Name:kubernetes-upgrade-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:10:07.941550    4963 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:10:07.947868    4963 out.go:177] * Starting "kubernetes-upgrade-965000" primary control-plane node in "kubernetes-upgrade-965000" cluster
	I0913 12:10:07.951913    4963 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:10:07.951929    4963 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:10:07.951945    4963 cache.go:56] Caching tarball of preloaded images
	I0913 12:10:07.952003    4963 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:10:07.952008    4963 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:10:07.952068    4963 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kubernetes-upgrade-965000/config.json ...
	I0913 12:10:07.952562    4963 start.go:360] acquireMachinesLock for kubernetes-upgrade-965000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:10:07.952593    4963 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "kubernetes-upgrade-965000"
	I0913 12:10:07.952602    4963 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:10:07.952607    4963 fix.go:54] fixHost starting: 
	I0913 12:10:07.952729    4963 fix.go:112] recreateIfNeeded on kubernetes-upgrade-965000: state=Stopped err=<nil>
	W0913 12:10:07.952738    4963 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:10:07.959901    4963 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-965000" ...
	I0913 12:10:07.963914    4963 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:10:07.963951    4963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:06:28:92:4b:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:10:07.965799    4963 main.go:141] libmachine: STDOUT: 
	I0913 12:10:07.965815    4963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:10:07.965844    4963 fix.go:56] duration metric: took 13.236125ms for fixHost
	I0913 12:10:07.965849    4963 start.go:83] releasing machines lock for "kubernetes-upgrade-965000", held for 13.251875ms
	W0913 12:10:07.965853    4963 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:10:07.965880    4963 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:10:07.965884    4963 start.go:729] Will try again in 5 seconds ...
	I0913 12:10:12.967817    4963 start.go:360] acquireMachinesLock for kubernetes-upgrade-965000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:10:12.968049    4963 start.go:364] duration metric: took 189.292µs to acquireMachinesLock for "kubernetes-upgrade-965000"
	I0913 12:10:12.968086    4963 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:10:12.968094    4963 fix.go:54] fixHost starting: 
	I0913 12:10:12.968390    4963 fix.go:112] recreateIfNeeded on kubernetes-upgrade-965000: state=Stopped err=<nil>
	W0913 12:10:12.968400    4963 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:10:12.972675    4963 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-965000" ...
	I0913 12:10:12.980042    4963 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:10:12.980149    4963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:06:28:92:4b:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubernetes-upgrade-965000/disk.qcow2
	I0913 12:10:12.984366    4963 main.go:141] libmachine: STDOUT: 
	I0913 12:10:12.984397    4963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:10:12.984431    4963 fix.go:56] duration metric: took 16.338583ms for fixHost
	I0913 12:10:12.984440    4963 start.go:83] releasing machines lock for "kubernetes-upgrade-965000", held for 16.3825ms
	W0913 12:10:12.984518    4963 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-965000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:10:12.992587    4963 out.go:201] 
	W0913 12:10:12.996588    4963 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:10:12.996599    4963 out.go:270] * 
	W0913 12:10:12.997639    4963 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:10:13.006559    4963 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-965000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-965000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-965000 version --output=json: exit status 1 (48.678209ms)

** stderr ** 
	error: context "kubernetes-upgrade-965000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
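The kubectl error is a downstream symptom: both starts exited before provisioning finished, so minikube never wrote a "kubernetes-upgrade-965000" entry into the kubeconfig, and any --context call can only report a missing context. A quick confirmation using client-go's kubeconfig loader, as an illustrative sketch (not part of the test code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig the same way kubectl does and check
	// whether the failed start ever registered its context.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	_, ok := cfg.Contexts["kubernetes-upgrade-965000"]
	fmt.Printf("context present: %v\n", ok) // false here: the cluster never came up
}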
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-13 12:10:13.066195 -0700 PDT m=+3021.537454210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-965000 -n kubernetes-upgrade-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-965000 -n kubernetes-upgrade-965000: exit status 7 (31.446166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-965000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-965000
--- FAIL: TestKubernetesUpgrade (18.33s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.56s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19636
- KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current364111554/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
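Exit status 56 is the DRV_UNSUPPORTED_OS exit shown above: the hyperkit driver exists only for Intel macs, so on this darwin/arm64 agent the subtest can never pass as written. A guard of the following shape would turn the case into a skip on unsupported hosts; this is a sketch, not the actual code in driver_install_or_update_test.go:

package driver_test

import (
	"runtime"
	"testing"
)

// TestHyperkitUpgradeGuard sketches an architecture guard for this
// failure mode: hyperkit requires darwin/amd64, so darwin/arm64 agents
// like this one would skip instead of failing with exit status 56.
func TestHyperkitUpgradeGuard(t *testing.T) {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skip("hyperkit driver is only supported on darwin/amd64")
	}
	// ...the upgrade steps would run here on supported hosts...
}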
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.56s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.16s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19636
- KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2571655281/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.16s)

TestStoppedBinaryUpgrade/Upgrade (575.83s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3304170522 start -p stopped-upgrade-748000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3304170522 start -p stopped-upgrade-748000 --memory=2200 --vm-driver=qemu2 : (40.715803333s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3304170522 -p stopped-upgrade-748000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3304170522 -p stopped-upgrade-748000 stop: (12.1171805s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-748000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0913 12:12:00.091520    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 12:13:53.297990    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 12:13:56.993962    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-748000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.885733167s)

-- stdout --
	* [stopped-upgrade-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-748000" primary control-plane node in "stopped-upgrade-748000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-748000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0913 12:11:06.936912    5002 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:11:06.937099    5002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:11:06.937106    5002 out.go:358] Setting ErrFile to fd 2...
	I0913 12:11:06.937109    5002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:11:06.937247    5002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:11:06.938432    5002 out.go:352] Setting JSON to false
	I0913 12:11:06.958403    5002 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4229,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:11:06.958479    5002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:11:06.963406    5002 out.go:177] * [stopped-upgrade-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:11:06.971424    5002 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:11:06.971468    5002 notify.go:220] Checking for updates...
	I0913 12:11:06.976897    5002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:11:06.980366    5002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:11:06.983373    5002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:11:06.986380    5002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:11:06.989459    5002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:11:06.992669    5002 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:11:06.996395    5002 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 12:11:06.999385    5002 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:11:07.003315    5002 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:11:07.010263    5002 start.go:297] selected driver: qemu2
	I0913 12:11:07.010269    5002 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:11:07.010317    5002 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:11:07.012872    5002 cni.go:84] Creating CNI manager for ""
	I0913 12:11:07.012910    5002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:11:07.012938    5002 start.go:340] cluster config:
	{Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:11:07.012992    5002 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:11:07.020195    5002 out.go:177] * Starting "stopped-upgrade-748000" primary control-plane node in "stopped-upgrade-748000" cluster
	I0913 12:11:07.024381    5002 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 12:11:07.024397    5002 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0913 12:11:07.024408    5002 cache.go:56] Caching tarball of preloaded images
	I0913 12:11:07.024475    5002 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:11:07.024481    5002 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0913 12:11:07.024531    5002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/config.json ...
	I0913 12:11:07.025022    5002 start.go:360] acquireMachinesLock for stopped-upgrade-748000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:11:07.025056    5002 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "stopped-upgrade-748000"
	I0913 12:11:07.025064    5002 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:11:07.025070    5002 fix.go:54] fixHost starting: 
	I0913 12:11:07.025185    5002 fix.go:112] recreateIfNeeded on stopped-upgrade-748000: state=Stopped err=<nil>
	W0913 12:11:07.025193    5002 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:11:07.029149    5002 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-748000" ...
	I0913 12:11:07.037358    5002 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:11:07.037440    5002 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50476-:22,hostfwd=tcp::50477-:2376,hostname=stopped-upgrade-748000 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/disk.qcow2
	I0913 12:11:07.083988    5002 main.go:141] libmachine: STDOUT: 
	I0913 12:11:07.084017    5002 main.go:141] libmachine: STDERR: 
	I0913 12:11:07.084024    5002 main.go:141] libmachine: Waiting for VM to start (ssh -p 50476 docker@127.0.0.1)...
	I0913 12:11:26.759509    5002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/config.json ...
	I0913 12:11:26.760093    5002 machine.go:93] provisionDockerMachine start ...
	I0913 12:11:26.760266    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:26.760694    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:26.760706    5002 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 12:11:26.848431    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 12:11:26.848461    5002 buildroot.go:166] provisioning hostname "stopped-upgrade-748000"
	I0913 12:11:26.848617    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:26.848904    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:26.848915    5002 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-748000 && echo "stopped-upgrade-748000" | sudo tee /etc/hostname
	I0913 12:11:26.930448    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-748000
	
	I0913 12:11:26.930547    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:26.930746    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:26.930768    5002 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-748000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-748000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-748000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 12:11:27.005807    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 12:11:27.005822    5002 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19636-1170/.minikube CaCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19636-1170/.minikube}
	I0913 12:11:27.005832    5002 buildroot.go:174] setting up certificates
	I0913 12:11:27.005850    5002 provision.go:84] configureAuth start
	I0913 12:11:27.005855    5002 provision.go:143] copyHostCerts
	I0913 12:11:27.005952    5002 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem, removing ...
	I0913 12:11:27.005968    5002 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem
	I0913 12:11:27.006115    5002 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.pem (1078 bytes)
	I0913 12:11:27.006325    5002 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem, removing ...
	I0913 12:11:27.006331    5002 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem
	I0913 12:11:27.006403    5002 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/cert.pem (1123 bytes)
	I0913 12:11:27.006574    5002 exec_runner.go:144] found /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem, removing ...
	I0913 12:11:27.006579    5002 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem
	I0913 12:11:27.006649    5002 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19636-1170/.minikube/key.pem (1679 bytes)
	I0913 12:11:27.006755    5002 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-748000 san=[127.0.0.1 localhost minikube stopped-upgrade-748000]
	I0913 12:11:27.127564    5002 provision.go:177] copyRemoteCerts
	I0913 12:11:27.127620    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 12:11:27.127629    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:11:27.161526    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 12:11:27.168599    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 12:11:27.175243    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 12:11:27.181850    5002 provision.go:87] duration metric: took 176.002209ms to configureAuth
	I0913 12:11:27.181862    5002 buildroot.go:189] setting minikube options for container-runtime
	I0913 12:11:27.181957    5002 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:11:27.182003    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.182090    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.182098    5002 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 12:11:27.246581    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 12:11:27.246590    5002 buildroot.go:70] root file system type: tmpfs
	I0913 12:11:27.246643    5002 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 12:11:27.246693    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.246797    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.246832    5002 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 12:11:27.314070    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 12:11:27.314144    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.314252    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.314262    5002 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 12:11:27.680963    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0913 12:11:27.680984    5002 machine.go:96] duration metric: took 920.909875ms to provisionDockerMachine
	I0913 12:11:27.680991    5002 start.go:293] postStartSetup for "stopped-upgrade-748000" (driver="qemu2")
	I0913 12:11:27.680997    5002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 12:11:27.681059    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 12:11:27.681068    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:11:27.717028    5002 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 12:11:27.718736    5002 info.go:137] Remote host: Buildroot 2021.02.12
	I0913 12:11:27.718744    5002 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/addons for local assets ...
	I0913 12:11:27.718838    5002 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19636-1170/.minikube/files for local assets ...
	I0913 12:11:27.718960    5002 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem -> 16952.pem in /etc/ssl/certs
	I0913 12:11:27.719098    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 12:11:27.721962    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem --> /etc/ssl/certs/16952.pem (1708 bytes)
	I0913 12:11:27.729072    5002 start.go:296] duration metric: took 48.07825ms for postStartSetup
	I0913 12:11:27.729086    5002 fix.go:56] duration metric: took 20.704838625s for fixHost
	I0913 12:11:27.729125    5002 main.go:141] libmachine: Using SSH client type: native
	I0913 12:11:27.729229    5002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dd9190] 0x104ddb9d0 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I0913 12:11:27.729234    5002 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 12:11:27.794642    5002 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726254687.943278879
	
	I0913 12:11:27.794650    5002 fix.go:216] guest clock: 1726254687.943278879
	I0913 12:11:27.794654    5002 fix.go:229] Guest: 2024-09-13 12:11:27.943278879 -0700 PDT Remote: 2024-09-13 12:11:27.729088 -0700 PDT m=+20.823215710 (delta=214.190879ms)
	I0913 12:11:27.794668    5002 fix.go:200] guest clock delta is within tolerance: 214.190879ms
	I0913 12:11:27.794673    5002 start.go:83] releasing machines lock for "stopped-upgrade-748000", held for 20.770434875s
	I0913 12:11:27.794747    5002 ssh_runner.go:195] Run: cat /version.json
	I0913 12:11:27.794756    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:11:27.794821    5002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 12:11:27.794859    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	W0913 12:11:27.829966    5002 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0913 12:11:27.830020    5002 ssh_runner.go:195] Run: systemctl --version
	I0913 12:11:27.871804    5002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 12:11:27.873521    5002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 12:11:27.873563    5002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0913 12:11:27.876982    5002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0913 12:11:27.881856    5002 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 12:11:27.881866    5002 start.go:495] detecting cgroup driver to use...
	I0913 12:11:27.881957    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 12:11:27.888666    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0913 12:11:27.891582    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 12:11:27.894440    5002 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 12:11:27.894473    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 12:11:27.897658    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 12:11:27.901026    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 12:11:27.904279    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 12:11:27.907110    5002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 12:11:27.910056    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 12:11:27.913406    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 12:11:27.916581    5002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 12:11:27.919435    5002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 12:11:27.922048    5002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 12:11:27.925021    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:28.003060    5002 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 12:11:28.013378    5002 start.go:495] detecting cgroup driver to use...
	I0913 12:11:28.013458    5002 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 12:11:28.018446    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 12:11:28.023362    5002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 12:11:28.029759    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 12:11:28.035025    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 12:11:28.039693    5002 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 12:11:28.093380    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 12:11:28.098700    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 12:11:28.104148    5002 ssh_runner.go:195] Run: which cri-dockerd
	I0913 12:11:28.105566    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 12:11:28.109009    5002 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0913 12:11:28.114158    5002 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 12:11:28.189529    5002 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 12:11:28.269282    5002 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 12:11:28.269359    5002 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 12:11:28.274722    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:28.351962    5002 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 12:11:29.512228    5002 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160294084s)
	I0913 12:11:29.512302    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 12:11:29.518055    5002 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 12:11:29.526495    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 12:11:29.531074    5002 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 12:11:29.607290    5002 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 12:11:29.690681    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:29.776509    5002 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 12:11:29.782236    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 12:11:29.787301    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:29.849834    5002 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 12:11:29.888256    5002 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 12:11:29.888353    5002 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 12:11:29.890459    5002 start.go:563] Will wait 60s for crictl version
	I0913 12:11:29.890530    5002 ssh_runner.go:195] Run: which crictl
	I0913 12:11:29.891910    5002 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 12:11:29.906331    5002 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0913 12:11:29.906408    5002 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 12:11:29.921913    5002 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 12:11:29.942937    5002 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0913 12:11:29.943015    5002 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0913 12:11:29.944204    5002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 12:11:29.947742    5002 kubeadm.go:883] updating cluster {Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0913 12:11:29.947795    5002 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 12:11:29.947849    5002 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 12:11:29.958453    5002 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 12:11:29.958462    5002 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 12:11:29.958516    5002 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 12:11:29.961793    5002 ssh_runner.go:195] Run: which lz4
	I0913 12:11:29.963159    5002 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 12:11:29.964510    5002 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 12:11:29.964519    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0913 12:11:30.899680    5002 docker.go:649] duration metric: took 936.598958ms to copy over tarball
	I0913 12:11:30.899748    5002 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 12:11:32.055314    5002 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155596334s)
	I0913 12:11:32.055327    5002 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 12:11:32.070636    5002 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 12:11:32.074817    5002 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0913 12:11:32.080250    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:32.158381    5002 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 12:11:34.860184    5002 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.701891667s)
	I0913 12:11:34.860288    5002 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 12:11:34.870808    5002 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 12:11:34.870829    5002 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 12:11:34.870834    5002 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 12:11:34.875629    5002 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:34.877519    5002 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:34.879527    5002 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:34.880364    5002 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:34.881457    5002 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:34.881478    5002 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:34.883439    5002 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:34.883537    5002 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:34.884867    5002 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:34.884948    5002 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:34.886359    5002 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:34.886406    5002 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 12:11:34.887397    5002 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:34.887444    5002 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:34.888350    5002 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 12:11:34.889190    5002 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.316987    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:35.322882    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:35.325390    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:35.332919    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:35.334656    5002 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0913 12:11:35.334679    5002 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:35.334734    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0913 12:11:35.341960    5002 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0913 12:11:35.341981    5002 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:35.342049    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0913 12:11:35.356680    5002 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0913 12:11:35.356705    5002 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:35.356709    5002 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0913 12:11:35.356719    5002 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:35.356771    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0913 12:11:35.356771    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 12:11:35.362664    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0913 12:11:35.368282    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0913 12:11:35.370402    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:35.379970    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0913 12:11:35.380000    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0913 12:11:35.384160    5002 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0913 12:11:35.384179    5002 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:35.384232    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0913 12:11:35.394333    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 12:11:35.394454    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0913 12:11:35.396036    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0913 12:11:35.396049    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0913 12:11:35.402701    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0913 12:11:35.415514    5002 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 12:11:35.415667    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.438297    5002 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0913 12:11:35.438321    5002 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.438330    5002 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0913 12:11:35.438349    5002 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0913 12:11:35.438385    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 12:11:35.438391    5002 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0913 12:11:35.480340    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 12:11:35.480341    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 12:11:35.480480    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 12:11:35.480480    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 12:11:35.494454    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0913 12:11:35.494486    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0913 12:11:35.513123    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0913 12:11:35.513155    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0913 12:11:35.558294    5002 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 12:11:35.558308    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0913 12:11:35.643343    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0913 12:11:35.643378    5002 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 12:11:35.643386    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0913 12:11:35.703987    5002 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 12:11:35.704120    5002 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:35.745961    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0913 12:11:35.745982    5002 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0913 12:11:35.745987    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0913 12:11:35.746017    5002 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0913 12:11:35.746035    5002 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:35.746099    5002 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:11:35.927084    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0913 12:11:35.927119    5002 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 12:11:35.927262    5002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 12:11:35.928597    5002 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0913 12:11:35.928609    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0913 12:11:35.956719    5002 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 12:11:35.956733    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0913 12:11:36.192027    5002 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 12:11:36.192068    5002 cache_images.go:92] duration metric: took 1.321279833s to LoadCachedImages
	W0913 12:11:36.192100    5002 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0913 12:11:36.192106    5002 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0913 12:11:36.192160    5002 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-748000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 12:11:36.192252    5002 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 12:11:36.209948    5002 cni.go:84] Creating CNI manager for ""
	I0913 12:11:36.209967    5002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:11:36.209973    5002 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 12:11:36.209982    5002 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-748000 NodeName:stopped-upgrade-748000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 12:11:36.210050    5002 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-748000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
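
	Note: the cgroupDriver in the KubeletConfiguration above mirrors what the earlier `docker info --format {{.CgroupDriver}}` probe returned; a mismatch between Docker and the kubelet is a common cause of kubelet start failures. A minimal sketch of that consistency check (illustrative only, assuming Docker is reachable and the kubelet config path shown above):
	
	docker_driver=$(docker info --format '{{.CgroupDriver}}')
	kubelet_driver=$(awk '$1 == "cgroupDriver:" {print $2}' /var/lib/kubelet/config.yaml)
	# Both should report the same driver (cgroupfs here); warn otherwise.
	[ "$docker_driver" = "$kubelet_driver" ] || echo "cgroup driver mismatch: docker=$docker_driver kubelet=$kubelet_driver" >&2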
	
	I0913 12:11:36.210487    5002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 12:11:36.213382    5002 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 12:11:36.213418    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 12:11:36.217251    5002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0913 12:11:36.222067    5002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 12:11:36.227168    5002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0913 12:11:36.232568    5002 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0913 12:11:36.234070    5002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
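
	Note: the one-liner above rewrites /etc/hosts atomically: it keeps every line except a stale control-plane.minikube.internal entry, appends the fresh mapping, and copies the temp file back over /etc/hosts in a single step. A commented expansion with the same effect:
	
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # keep all other entries
	  printf '10.0.2.15\tcontrol-plane.minikube.internal\n'      # append the new mapping
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts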
	I0913 12:11:36.237717    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:11:36.322586    5002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:11:36.332231    5002 certs.go:68] Setting up /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000 for IP: 10.0.2.15
	I0913 12:11:36.332245    5002 certs.go:194] generating shared ca certs ...
	I0913 12:11:36.332254    5002 certs.go:226] acquiring lock for ca certs: {Name:mka395184640c64d3892ae138bcca34b27eb400d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.332433    5002 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key
	I0913 12:11:36.332485    5002 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key
	I0913 12:11:36.332493    5002 certs.go:256] generating profile certs ...
	I0913 12:11:36.332569    5002 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.key
	I0913 12:11:36.332590    5002 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47
	I0913 12:11:36.332600    5002 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0913 12:11:36.375188    5002 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47 ...
	I0913 12:11:36.375203    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47: {Name:mke754fdfe22cc0e0729d44e40da898b602d46bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.375719    5002 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47 ...
	I0913 12:11:36.375727    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47: {Name:mk0d9dc37fb392f3d1ec39b7fcf3349303ce4783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.375882    5002 certs.go:381] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt.1f099c47 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt
	I0913 12:11:36.376046    5002 certs.go:385] copying /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key.1f099c47 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key
	I0913 12:11:36.376200    5002 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/proxy-client.key
	I0913 12:11:36.376333    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695.pem (1338 bytes)
	W0913 12:11:36.376366    5002 certs.go:480] ignoring /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695_empty.pem, impossibly tiny 0 bytes
	I0913 12:11:36.376372    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 12:11:36.376391    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem (1078 bytes)
	I0913 12:11:36.376415    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem (1123 bytes)
	I0913 12:11:36.376436    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/key.pem (1679 bytes)
	I0913 12:11:36.376478    5002 certs.go:484] found cert: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem (1708 bytes)
	I0913 12:11:36.376806    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 12:11:36.383777    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 12:11:36.390959    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 12:11:36.397610    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 12:11:36.405084    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 12:11:36.412424    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 12:11:36.419494    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 12:11:36.426116    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 12:11:36.433084    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/1695.pem --> /usr/share/ca-certificates/1695.pem (1338 bytes)
	I0913 12:11:36.440306    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/ssl/certs/16952.pem --> /usr/share/ca-certificates/16952.pem (1708 bytes)
	I0913 12:11:36.446823    5002 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 12:11:36.453299    5002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 12:11:36.458494    5002 ssh_runner.go:195] Run: openssl version
	I0913 12:11:36.460386    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16952.pem && ln -fs /usr/share/ca-certificates/16952.pem /etc/ssl/certs/16952.pem"
	I0913 12:11:36.463232    5002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16952.pem
	I0913 12:11:36.464547    5002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 18:36 /usr/share/ca-certificates/16952.pem
	I0913 12:11:36.464574    5002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16952.pem
	I0913 12:11:36.466348    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16952.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 12:11:36.469474    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 12:11:36.472647    5002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:11:36.474179    5002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 18:21 /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:11:36.474252    5002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 12:11:36.476292    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 12:11:36.479487    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1695.pem && ln -fs /usr/share/ca-certificates/1695.pem /etc/ssl/certs/1695.pem"
	I0913 12:11:36.482286    5002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1695.pem
	I0913 12:11:36.483816    5002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 18:36 /usr/share/ca-certificates/1695.pem
	I0913 12:11:36.483839    5002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1695.pem
	I0913 12:11:36.485610    5002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1695.pem /etc/ssl/certs/51391683.0"
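
	Note: each `openssl x509 -hash` / `ln -fs` pair above maintains OpenSSL's hashed CA lookup directory: a CA in /etc/ssl/certs is located by a filename derived from its subject hash. A sketch of the convention, using the minikubeCA file from the log:
	
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"  # OpenSSL resolves CAs by <hash>.0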
	I0913 12:11:36.488955    5002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 12:11:36.490395    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 12:11:36.492300    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 12:11:36.494159    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 12:11:36.496380    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 12:11:36.498373    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 12:11:36.500220    5002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
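
	Note: `-checkend 86400` makes openssl exit 0 only if the certificate is still valid 86400 seconds (24 hours) from now; that exit status is how the validity sweep above decides whether a cert needs regeneration. For example:
	
	# Non-zero exit means the cert expires within 24h and would be renewed.
	if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver.crt expires within 24h" >&2
	fi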
	I0913 12:11:36.502234    5002 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-748000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 12:11:36.502306    5002 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 12:11:36.512965    5002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 12:11:36.516076    5002 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 12:11:36.516086    5002 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 12:11:36.516110    5002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 12:11:36.518878    5002 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 12:11:36.519175    5002 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-748000" does not appear in /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:11:36.519268    5002 kubeconfig.go:62] /Users/jenkins/minikube-integration/19636-1170/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-748000" cluster setting kubeconfig missing "stopped-upgrade-748000" context setting]
	I0913 12:11:36.519480    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:11:36.519887    5002 kapi.go:59] client config for stopped-upgrade-748000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063b1540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 12:11:36.520218    5002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 12:11:36.522799    5002 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-748000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
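
	Note: the drift check above leans on diff's exit status (0 = identical, 1 = files differ, >1 = error); any difference between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new triggers a reconfigure. Roughly:
	
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  # files differ: adopt the new config, then replay the kubeadm init phases
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	fi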
	I0913 12:11:36.522808    5002 kubeadm.go:1160] stopping kube-system containers ...
	I0913 12:11:36.522855    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 12:11:36.533034    5002 docker.go:483] Stopping containers: [5f8e3aa0e56c 72ed56d5e8b8 1a47681bea37 813eda68f74d a25d0b8881b1 a97dac85d1aa ece5ce1f1212 95ac3fc8a10e]
	I0913 12:11:36.533106    5002 ssh_runner.go:195] Run: docker stop 5f8e3aa0e56c 72ed56d5e8b8 1a47681bea37 813eda68f74d a25d0b8881b1 a97dac85d1aa ece5ce1f1212 95ac3fc8a10e
	I0913 12:11:36.544172    5002 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 12:11:36.549687    5002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:11:36.552945    5002 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 12:11:36.552953    5002 kubeadm.go:157] found existing configuration files:
	
	I0913 12:11:36.552981    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I0913 12:11:36.555791    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 12:11:36.555816    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:11:36.558195    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I0913 12:11:36.561089    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 12:11:36.561112    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:11:36.564038    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I0913 12:11:36.566427    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 12:11:36.566460    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:11:36.569331    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I0913 12:11:36.572155    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 12:11:36.572176    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
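
	Note: the grep/rm pairs above are a stale-kubeconfig sweep: any of the four kubeconfigs that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it (here they are all simply missing, so every grep fails). Condensed:
	
	endpoint=https://control-plane.minikube.internal:50511
	for f in admin kubelet controller-manager scheduler; do
	  # a missing or mismatched file fails the grep and gets removed
	  sudo grep -q "$endpoint" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done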
	I0913 12:11:36.574961    5002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:11:36.577736    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:36.600379    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:37.048792    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:37.175493    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 12:11:37.198285    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
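
	Note: rather than running a full `kubeadm init`, the restart path replays individual init phases in dependency order: certs, kubeconfig, kubelet-start, control-plane, etcd. The five Run lines above are equivalent to:
	
	KPATH=/var/lib/minikube/binaries/v1.24.1
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so "certs all" splits into two arguments
	  sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done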
	I0913 12:11:37.224868    5002 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:11:37.224954    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:11:37.727081    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:11:38.226994    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:11:38.232769    5002 api_server.go:72] duration metric: took 1.0079415s to wait for apiserver process to appear ...
	I0913 12:11:38.232778    5002 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:11:38.232788    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:43.234668    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:43.234697    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:48.234731    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:48.234786    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:53.235043    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:53.235086    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:11:58.235394    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:11:58.235428    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:03.235903    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:03.235935    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:08.236459    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:08.236482    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:13.237249    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:13.237270    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:18.238253    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:18.238286    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:23.239826    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:23.239946    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:28.242385    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:28.242433    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:33.244572    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:33.244594    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:38.246197    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
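
	Note: each healthz probe above is an HTTPS GET against the apiserver with a roughly five-second client budget; after this stretch of timeouts, minikube falls back to gathering component logs (the lines that follow). An equivalent one-shot probe, skipping TLS verification for brevity (minikube itself authenticates with the client certs from the kapi config earlier):
	
	curl -k --max-time 5 https://10.0.2.15:8443/healthz && echo apiserver healthy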
	I0913 12:12:38.246431    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:38.263637    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:12:38.263743    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:38.276768    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:12:38.276864    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:38.287983    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:12:38.288069    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:38.298292    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:12:38.298374    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:38.309068    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:12:38.309155    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:38.319305    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:12:38.319384    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:38.329754    5002 logs.go:276] 0 containers: []
	W0913 12:12:38.329765    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:38.329835    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:38.340454    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:12:38.340471    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:12:38.340476    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:12:38.353084    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:12:38.353101    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:12:38.364570    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:12:38.364580    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:12:38.379687    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:12:38.379701    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:12:38.397549    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:38.397559    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:38.476640    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:12:38.476651    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:12:38.490683    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:12:38.490694    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:12:38.502468    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:12:38.502483    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:12:38.518494    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:38.518504    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:38.545429    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:38.545436    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:38.584309    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:38.584320    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:38.588531    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:12:38.588538    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:38.600725    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:12:38.600736    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:12:38.614761    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:12:38.614772    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:12:38.626132    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:12:38.626144    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:12:38.637691    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:12:38.637702    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:12:38.683989    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:12:38.684008    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
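
	Note: the "container status" gatherer in these cycles uses a fallback one-liner: if crictl is not installed, `which crictl || echo crictl` substitutes the literal word crictl, that command fails, and control falls through to `sudo docker ps -a`. In quoted form:
	
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a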
	I0913 12:12:41.198068    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:46.200097    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:46.200345    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:46.220904    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:12:46.221024    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:46.236090    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:12:46.236188    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:46.248589    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:12:46.248668    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:46.263700    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:12:46.263782    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:46.274644    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:12:46.274731    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:46.285805    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:12:46.285898    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:46.295552    5002 logs.go:276] 0 containers: []
	W0913 12:12:46.295563    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:46.295634    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:46.306052    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:12:46.306070    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:46.306075    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:46.342950    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:12:46.342963    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:12:46.357144    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:12:46.357154    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:12:46.372632    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:12:46.372643    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:12:46.411291    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:12:46.411303    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:12:46.423385    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:12:46.423399    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:12:46.437181    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:12:46.437191    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:12:46.448759    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:12:46.448769    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:12:46.465748    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:46.465758    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:46.491264    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:12:46.491274    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:46.502857    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:46.502869    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:46.541238    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:46.541245    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:46.545746    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:12:46.545754    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:12:46.560818    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:12:46.560828    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:12:46.572048    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:12:46.572058    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:12:46.583909    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:12:46.583920    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:12:46.596715    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:12:46.596725    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:12:49.109821    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:12:54.112027    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:12:54.112424    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:12:54.140976    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:12:54.141112    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:12:54.163177    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:12:54.163287    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:12:54.176175    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:12:54.176265    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:12:54.187528    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:12:54.187616    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:12:54.202824    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:12:54.202905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:12:54.213565    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:12:54.213641    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:12:54.224193    5002 logs.go:276] 0 containers: []
	W0913 12:12:54.224202    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:12:54.224265    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:12:54.234276    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:12:54.234295    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:12:54.234301    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:12:54.273282    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:12:54.273291    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:12:54.287592    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:12:54.287602    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:12:54.307090    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:12:54.307098    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:12:54.318501    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:12:54.318511    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:12:54.343570    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:12:54.343578    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:12:54.355741    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:12:54.355755    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:12:54.393384    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:12:54.393397    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:12:54.405560    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:12:54.405573    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:12:54.417670    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:12:54.417685    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:12:54.431923    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:12:54.431933    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:12:54.443071    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:12:54.443080    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:12:54.455055    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:12:54.455069    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:12:54.459626    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:12:54.459637    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:12:54.498124    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:12:54.498135    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:12:54.513306    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:12:54.513316    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:12:54.530587    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:12:54.530597    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:12:57.043571    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:02.045620    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:02.045854    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:02.069938    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:02.070074    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:02.088276    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:02.088376    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:02.100682    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:02.100765    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:02.111576    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:02.111655    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:02.122121    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:02.122204    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:02.132677    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:02.132754    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:02.143213    5002 logs.go:276] 0 containers: []
	W0913 12:13:02.143227    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:02.143298    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:02.154979    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:02.154995    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:02.155000    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:02.169758    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:02.169771    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:02.187117    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:02.187127    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:02.201905    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:02.201917    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:02.214031    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:02.214044    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:02.228744    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:02.228754    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:02.254467    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:02.254476    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:02.270903    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:02.270913    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:02.308521    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:02.308532    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:02.320445    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:02.320457    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:02.331953    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:02.331966    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:02.347588    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:02.347600    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:02.387190    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:02.387197    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:02.424391    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:02.424404    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:02.436606    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:02.436618    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:02.449004    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:02.449019    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:02.453564    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:02.453571    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:04.967323    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:09.969577    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:09.969865    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:09.994319    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:09.994436    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:10.011145    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:10.011240    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:10.024906    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:10.025000    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:10.037008    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:10.037102    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:10.053179    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:10.053267    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:10.064231    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:10.064323    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:10.075630    5002 logs.go:276] 0 containers: []
	W0913 12:13:10.075641    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:10.075714    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:10.086571    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:10.086587    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:10.086594    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:10.126961    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:10.126970    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:10.140902    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:10.140913    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:10.155049    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:10.155060    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:10.166799    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:10.166814    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:10.178768    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:10.178779    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:10.182915    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:10.182925    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:10.217088    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:10.217099    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:10.232385    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:10.232397    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:10.248641    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:10.248656    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:10.264253    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:10.264269    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:10.276198    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:10.276208    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:10.288272    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:10.288282    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:10.326869    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:10.326880    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:10.338805    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:10.338818    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:10.357303    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:10.357314    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:10.369090    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:10.369101    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:12.896543    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:17.898743    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:17.898971    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:17.924480    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:17.924575    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:17.937298    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:17.937389    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:17.947841    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:17.947915    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:17.957826    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:17.957905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:17.974904    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:17.974983    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:17.985908    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:17.985991    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:17.996411    5002 logs.go:276] 0 containers: []
	W0913 12:13:17.996425    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:17.996494    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:18.006862    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:18.006881    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:18.006886    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:18.018771    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:18.018786    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:18.032708    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:18.032719    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:18.071175    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:18.071188    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:18.083405    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:18.083418    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:18.095205    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:18.095217    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:18.106765    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:18.106775    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:18.117407    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:18.117418    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:18.141020    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:18.141027    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:18.176390    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:18.176400    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:18.188659    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:18.188668    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:18.227083    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:18.227093    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:18.231175    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:18.231183    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:18.246300    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:18.246309    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:18.269471    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:18.269486    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:18.283299    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:18.283313    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:18.294360    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:18.294375    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:20.812014    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:25.814495    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:25.814792    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:25.844266    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:25.844416    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:25.862701    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:25.862805    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:25.875648    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:25.875725    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:25.891641    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:25.891729    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:25.907700    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:25.907781    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:25.917883    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:25.917956    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:25.928346    5002 logs.go:276] 0 containers: []
	W0913 12:13:25.928362    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:25.928435    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:25.939242    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:25.939262    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:25.939267    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:25.952746    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:25.952756    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:25.967269    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:25.967280    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:25.978544    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:25.978555    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:25.997546    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:25.997556    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:26.009823    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:26.009836    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:26.022591    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:26.022602    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:26.028171    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:26.028185    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:26.065803    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:26.065813    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:26.077664    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:26.077674    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:26.093350    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:26.093361    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:26.119423    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:26.119435    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:26.131439    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:26.131450    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:26.169402    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:26.169414    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:26.183685    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:26.183694    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:26.195604    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:26.195615    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:26.208087    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:26.208097    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:28.748416    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:33.749991    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:33.750506    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:33.780570    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:33.780731    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:33.799843    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:33.799940    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:33.813587    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:33.813683    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:33.825271    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:33.825362    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:33.846256    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:33.846332    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:33.857979    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:33.858067    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:33.869418    5002 logs.go:276] 0 containers: []
	W0913 12:13:33.869429    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:33.869502    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:33.880159    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:33.880177    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:33.880183    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:33.904191    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:33.904202    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:33.938350    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:33.938361    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:33.952660    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:33.952674    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:33.965056    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:33.965068    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:33.977200    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:33.977214    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:33.996714    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:33.996725    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:34.008893    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:34.008904    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:34.047565    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:34.047575    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:34.052098    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:34.052105    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:34.063889    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:34.063900    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:34.079503    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:34.079513    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:34.117684    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:34.117698    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:34.132423    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:34.132434    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:34.144871    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:34.144882    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:34.159007    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:34.159017    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:34.170871    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:34.170884    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:36.683907    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:41.686032    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:41.686191    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:41.699435    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:41.699509    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:41.710180    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:41.710249    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:41.720378    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:41.720447    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:41.730687    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:41.730777    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:41.746335    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:41.746417    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:41.757245    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:41.757321    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:41.767857    5002 logs.go:276] 0 containers: []
	W0913 12:13:41.767868    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:41.767943    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:41.778432    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:41.778449    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:41.778455    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:41.815472    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:41.815481    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:41.829868    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:41.829878    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:41.844451    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:41.844466    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:41.856097    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:41.856107    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:41.874067    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:41.874082    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:41.885815    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:41.885827    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:41.890087    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:41.890096    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:41.924714    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:41.924724    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:41.939690    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:41.939700    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:41.955309    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:41.955320    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:41.966958    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:41.966971    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:41.979390    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:41.979403    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:41.992645    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:41.992655    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:42.034788    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:42.034805    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:42.048033    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:42.048044    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:42.059350    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:42.059360    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:44.584726    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:49.586927    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:49.587286    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:49.616325    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:49.616482    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:49.638374    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:49.638467    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:49.651315    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:49.651403    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:49.662992    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:49.663079    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:49.673327    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:49.673405    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:49.692297    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:49.692374    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:49.702685    5002 logs.go:276] 0 containers: []
	W0913 12:13:49.702696    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:49.702766    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:49.713220    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:49.713238    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:49.713243    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:49.751944    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:49.751956    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:49.769553    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:49.769563    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:49.783780    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:49.783791    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:49.797383    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:49.797393    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:13:49.809680    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:49.809691    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:49.844354    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:49.844369    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:49.882619    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:49.882633    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:49.895048    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:49.895058    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:49.909433    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:49.909443    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:49.932342    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:49.932350    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:49.943998    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:49.944009    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:49.948067    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:49.948076    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:49.962297    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:49.962307    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:49.977248    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:49.977257    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:49.988897    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:49.988912    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:50.000100    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:50.000109    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:52.513513    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:13:57.514211    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:13:57.514332    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:13:57.524801    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:13:57.524885    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:13:57.535443    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:13:57.535524    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:13:57.550369    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:13:57.550441    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:13:57.561042    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:13:57.561131    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:13:57.571331    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:13:57.571412    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:13:57.582079    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:13:57.582161    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:13:57.591789    5002 logs.go:276] 0 containers: []
	W0913 12:13:57.591801    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:13:57.591868    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:13:57.602540    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:13:57.602557    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:13:57.602563    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:13:57.614280    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:13:57.614291    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:13:57.651636    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:13:57.651645    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:13:57.686768    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:13:57.686781    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:13:57.724464    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:13:57.724478    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:13:57.738611    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:13:57.738621    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:13:57.753096    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:13:57.753107    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:13:57.764688    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:13:57.764700    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:13:57.783942    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:13:57.783956    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:13:57.788609    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:13:57.788619    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:13:57.802944    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:13:57.802954    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:13:57.823712    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:13:57.823725    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:13:57.835166    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:13:57.835176    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:13:57.860536    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:13:57.860547    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:13:57.872179    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:13:57.872189    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:13:57.886778    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:13:57.886793    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:13:57.902865    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:13:57.902879    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:00.416665    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:05.418770    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:05.418973    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:05.437879    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:05.437985    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:05.451504    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:05.451597    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:05.463674    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:05.463759    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:05.474673    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:05.474749    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:05.485421    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:05.485503    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:05.497109    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:05.497193    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:05.507206    5002 logs.go:276] 0 containers: []
	W0913 12:14:05.507219    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:05.507286    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:05.521750    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:05.521768    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:05.521774    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:05.557740    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:05.557754    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:05.569186    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:05.569201    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:05.585894    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:05.585908    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:05.597911    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:05.597921    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:05.616218    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:05.616228    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:05.639302    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:05.639309    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:05.652168    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:05.652180    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:05.669707    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:05.669721    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:05.680772    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:05.680781    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:05.695119    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:05.695134    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:05.733154    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:05.733164    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:05.745527    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:05.745540    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:05.784425    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:05.784433    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:05.788981    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:05.788988    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:05.802525    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:05.802536    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:05.817672    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:05.817682    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:08.330642    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:13.331450    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:13.331633    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:13.350165    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:13.350265    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:13.363885    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:13.363974    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:13.381363    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:13.381448    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:13.391723    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:13.391808    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:13.401942    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:13.402023    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:13.412092    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:13.412170    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:13.430312    5002 logs.go:276] 0 containers: []
	W0913 12:14:13.430324    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:13.430395    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:13.441039    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:13.441057    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:13.441065    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:13.480229    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:13.480237    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:13.517512    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:13.517522    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:13.531715    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:13.531725    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:13.543993    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:13.544005    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:13.557335    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:13.557490    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:13.569764    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:13.569777    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:13.581617    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:13.581628    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:13.593288    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:13.593301    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:13.605247    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:13.605261    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:13.622084    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:13.622097    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:13.636265    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:13.636279    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:13.659396    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:13.659409    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:13.682578    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:13.682585    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:13.695100    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:13.695112    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:13.699931    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:13.699940    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:13.738785    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:13.738795    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:16.255484    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:21.257686    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:21.257999    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:21.287898    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:21.288046    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:21.306556    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:21.306657    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:21.320725    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:21.320802    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:21.331610    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:21.331683    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:21.342176    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:21.342261    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:21.353400    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:21.353493    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:21.363938    5002 logs.go:276] 0 containers: []
	W0913 12:14:21.363950    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:21.364018    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:21.374549    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:21.374566    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:21.374572    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:21.413068    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:21.413086    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:21.448555    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:21.448570    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:21.487812    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:21.487829    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:21.500316    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:21.500327    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:21.513103    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:21.513115    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:21.517081    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:21.517089    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:21.531601    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:21.531612    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:21.545370    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:21.545380    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:21.557587    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:21.557598    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:21.568535    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:21.568546    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:21.592092    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:21.592100    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:21.603684    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:21.603698    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:21.633638    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:21.633648    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:21.648212    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:21.648224    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:21.671958    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:21.671969    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:21.686001    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:21.686013    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:24.203703    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:29.205892    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:29.206034    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:29.216935    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:29.217029    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:29.227351    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:29.227455    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:29.237848    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:29.237933    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:29.247920    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:29.248003    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:29.257987    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:29.258069    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:29.270002    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:29.270089    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:29.279923    5002 logs.go:276] 0 containers: []
	W0913 12:14:29.279935    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:29.280008    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:29.290408    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:29.290426    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:29.290431    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:29.329930    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:29.329939    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:29.334540    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:29.334547    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:29.348502    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:29.348515    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:29.382870    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:29.382881    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:29.420776    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:29.420787    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:29.435322    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:29.435332    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:29.460312    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:29.460320    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:29.477631    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:29.477644    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:29.490546    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:29.490555    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:29.515900    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:29.515914    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:29.528589    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:29.528605    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:29.543247    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:29.543256    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:29.556680    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:29.556695    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:29.573212    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:29.573220    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:29.592012    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:29.592028    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:29.604140    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:29.604151    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:32.122912    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:37.125079    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:37.125249    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:37.137647    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:37.137743    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:37.148407    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:37.148490    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:37.158514    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:37.158602    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:37.169407    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:37.169496    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:37.180826    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:37.180905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:37.191187    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:37.191269    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:37.202073    5002 logs.go:276] 0 containers: []
	W0913 12:14:37.202086    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:37.202163    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:37.212366    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:37.212384    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:37.212389    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:37.254132    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:37.254143    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:37.272076    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:37.272089    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:37.295526    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:37.295535    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:37.308543    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:37.308555    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:37.323512    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:37.323525    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:37.342877    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:37.342889    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:37.355622    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:37.355633    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:37.380258    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:37.380270    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:37.391735    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:37.391749    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:37.405029    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:37.405039    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:37.409397    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:37.409408    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:37.422254    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:37.422265    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:37.435661    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:37.435672    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:37.459490    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:37.459501    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:37.508644    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:37.508655    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:37.533309    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:37.533325    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:40.081420    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:45.083708    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
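	Every healthz probe in this section follows the same shape: a GET against https://10.0.2.15:8443/healthz that gives up after roughly five seconds with "Client.Timeout exceeded while awaiting headers". A minimal sketch of that probe follows; it is an illustration, not minikube's code, and it skips TLS verification for brevity where minikube actually trusts the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz mimics the five-second healthz check in the log: a GET
// against the apiserver that fails with a client timeout when nothing
// answers on 10.0.2.15:8443.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// Sketch only: the real client verifies against the minikube CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
```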
	I0913 12:14:45.084202    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:45.125957    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:45.126122    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:45.148503    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:45.148621    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:45.164663    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:45.164763    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:45.179915    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:45.180013    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:45.191857    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:45.191940    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:45.204075    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:45.204161    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:45.215147    5002 logs.go:276] 0 containers: []
	W0913 12:14:45.215158    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:45.215232    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:45.226875    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:45.226892    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:45.226898    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:45.231550    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:45.231561    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:45.247972    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:45.247984    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:45.265377    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:45.265386    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:45.282795    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:45.282812    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:45.295378    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:45.295387    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:45.320309    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:45.320317    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:45.335218    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:45.335228    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:45.347927    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:45.347939    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:45.361376    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:45.361389    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:45.398570    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:45.398582    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:45.411460    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:45.411472    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:45.424187    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:45.424200    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:45.464256    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:45.464279    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:45.479446    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:45.479459    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:45.518237    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:45.518250    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:45.537560    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:45.537578    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
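	Each diagnostic pass above fans out the same way: enumerate control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail 400 lines from every hit. A self-contained sketch of that fan-out, assuming a local `docker` binary on PATH (component names are the ones polled in the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The components each pass of the log enumerates.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Mirrors `docker logs --tail 400 <id>`; output discarded in this sketch.
			exec.Command("docker", "logs", "--tail", "400", id).Run()
		}
	}
}
```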
	I0913 12:14:48.054370    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:14:53.054974    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:14:53.055101    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:14:53.069361    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:14:53.069452    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:14:53.081791    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:14:53.081877    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:14:53.100498    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:14:53.100559    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:14:53.112005    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:14:53.112067    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:14:53.123240    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:14:53.123306    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:14:53.134791    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:14:53.134877    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:14:53.145834    5002 logs.go:276] 0 containers: []
	W0913 12:14:53.145844    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:14:53.145915    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:14:53.157267    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:14:53.157284    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:14:53.157290    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:14:53.161760    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:14:53.161767    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:14:53.177488    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:14:53.177503    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:14:53.197987    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:14:53.197997    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:14:53.210992    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:14:53.211006    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:14:53.236081    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:14:53.236095    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:14:53.275362    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:14:53.275374    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:14:53.291273    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:14:53.291289    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:14:53.304072    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:14:53.304084    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:14:53.317063    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:14:53.317073    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:14:53.330831    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:14:53.330842    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:14:53.342966    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:14:53.342977    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:14:53.384218    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:14:53.384229    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:14:53.420600    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:14:53.420613    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:14:53.434668    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:14:53.434678    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:14:53.449155    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:14:53.449165    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:14:53.460804    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:14:53.460817    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:14:55.973985    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:00.974245    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:00.974351    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:00.985745    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:00.985827    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:00.996820    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:00.996901    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:01.008629    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:01.008718    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:01.019413    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:01.019527    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:01.030673    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:01.030751    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:01.041883    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:01.041966    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:01.053382    5002 logs.go:276] 0 containers: []
	W0913 12:15:01.053394    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:01.053470    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:01.067895    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:01.067917    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:01.067923    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:01.081566    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:01.081578    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:01.106767    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:01.106781    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:01.121339    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:01.121350    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:01.136670    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:01.136678    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:01.149323    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:01.149333    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:01.165259    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:01.165270    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:01.184283    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:01.184293    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:01.199021    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:01.199032    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:01.239979    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:01.239993    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:01.245035    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:01.245045    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:01.260129    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:01.260143    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:01.273142    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:01.273152    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:01.284456    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:01.284467    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:01.295554    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:01.295564    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:01.332813    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:01.332824    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:01.371402    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:01.371412    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:03.885158    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:08.887390    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:08.887525    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:08.899224    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:08.899321    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:08.911297    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:08.911383    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:08.922212    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:08.922296    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:08.933699    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:08.933786    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:08.945791    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:08.945873    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:08.957531    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:08.957614    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:08.968254    5002 logs.go:276] 0 containers: []
	W0913 12:15:08.968267    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:08.968340    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:08.979313    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:08.979329    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:08.979343    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:08.984042    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:08.984052    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:09.022683    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:09.022697    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:09.038139    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:09.038152    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:09.050560    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:09.050572    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:09.064851    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:09.064864    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:09.105403    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:09.105419    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:09.120910    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:09.120921    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:09.137392    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:09.137403    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:09.155433    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:09.155442    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:09.169919    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:09.169930    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:09.208976    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:09.208988    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:09.221085    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:09.221096    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:09.232525    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:09.232537    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:09.248539    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:09.248549    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:09.265321    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:09.265333    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:09.288747    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:09.288757    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:11.804470    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:16.804853    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:16.804970    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:16.816821    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:16.816898    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:16.828353    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:16.828437    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:16.839846    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:16.839927    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:16.851450    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:16.851530    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:16.863169    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:16.863251    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:16.875104    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:16.875183    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:16.886658    5002 logs.go:276] 0 containers: []
	W0913 12:15:16.886671    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:16.886742    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:16.898552    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:16.898575    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:16.898581    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:16.918307    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:16.918324    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:16.943208    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:16.943218    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:16.956468    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:16.956480    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:16.998439    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:16.998453    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:17.035783    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:17.035794    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:17.049939    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:17.049955    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:17.063012    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:17.063026    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:17.075585    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:17.075602    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:17.086962    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:17.086971    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:17.091186    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:17.091193    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:17.129454    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:17.129465    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:17.144397    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:17.144407    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:17.155554    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:17.155564    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:17.169714    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:17.169724    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:17.181590    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:17.181600    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:17.197506    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:17.197516    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:19.717593    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:24.720010    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:24.720113    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:24.731592    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:24.731682    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:24.742917    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:24.743004    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:24.754416    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:24.754495    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:24.765703    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:24.765795    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:24.777228    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:24.777314    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:24.795006    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:24.795089    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:24.806659    5002 logs.go:276] 0 containers: []
	W0913 12:15:24.806671    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:24.806747    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:24.818288    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:24.818307    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:24.818314    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:24.854353    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:24.854363    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:24.871945    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:24.871956    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:24.887517    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:24.887529    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:24.906541    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:24.906555    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:24.934986    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:24.935002    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:24.946770    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:24.946786    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:24.983212    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:24.983222    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:24.987272    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:24.987280    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:25.004483    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:25.004493    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:25.015880    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:25.015891    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:25.037198    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:25.037208    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:25.075291    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:25.075301    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:25.089309    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:25.089318    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:25.101195    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:25.101212    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:25.113390    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:25.113400    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:25.128819    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:25.128829    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:27.643034    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:32.643137    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:32.643274    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:15:32.654726    5002 logs.go:276] 2 containers: [bda262502d3f a25d0b8881b1]
	I0913 12:15:32.654804    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:15:32.665811    5002 logs.go:276] 2 containers: [b6fda8c8a560 1a47681bea37]
	I0913 12:15:32.665896    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:15:32.678727    5002 logs.go:276] 1 containers: [5e9078611379]
	I0913 12:15:32.678812    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:15:32.690284    5002 logs.go:276] 2 containers: [07fb5368fa22 a97dac85d1aa]
	I0913 12:15:32.690372    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:15:32.701483    5002 logs.go:276] 1 containers: [089a5b251714]
	I0913 12:15:32.701568    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:15:32.713069    5002 logs.go:276] 2 containers: [3cb3ed80eb53 5f8e3aa0e56c]
	I0913 12:15:32.713151    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:15:32.724837    5002 logs.go:276] 0 containers: []
	W0913 12:15:32.724847    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:15:32.724924    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:15:32.736641    5002 logs.go:276] 2 containers: [4598acb393fb 67f49ee7ce35]
	I0913 12:15:32.736660    5002 logs.go:123] Gathering logs for etcd [1a47681bea37] ...
	I0913 12:15:32.736665    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a47681bea37"
	I0913 12:15:32.751987    5002 logs.go:123] Gathering logs for kube-proxy [089a5b251714] ...
	I0913 12:15:32.751995    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089a5b251714"
	I0913 12:15:32.764658    5002 logs.go:123] Gathering logs for kube-controller-manager [5f8e3aa0e56c] ...
	I0913 12:15:32.764671    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f8e3aa0e56c"
	I0913 12:15:32.778112    5002 logs.go:123] Gathering logs for storage-provisioner [4598acb393fb] ...
	I0913 12:15:32.778123    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4598acb393fb"
	I0913 12:15:32.800620    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:15:32.800636    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:15:32.805497    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:15:32.805506    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:15:32.841344    5002 logs.go:123] Gathering logs for kube-apiserver [a25d0b8881b1] ...
	I0913 12:15:32.841356    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a25d0b8881b1"
	I0913 12:15:32.880898    5002 logs.go:123] Gathering logs for coredns [5e9078611379] ...
	I0913 12:15:32.880915    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e9078611379"
	I0913 12:15:32.892871    5002 logs.go:123] Gathering logs for kube-scheduler [07fb5368fa22] ...
	I0913 12:15:32.892882    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07fb5368fa22"
	I0913 12:15:32.908698    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:15:32.908710    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:15:32.931265    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:15:32.931273    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:15:32.969133    5002 logs.go:123] Gathering logs for kube-apiserver [bda262502d3f] ...
	I0913 12:15:32.969140    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bda262502d3f"
	I0913 12:15:32.989643    5002 logs.go:123] Gathering logs for etcd [b6fda8c8a560] ...
	I0913 12:15:32.989654    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fda8c8a560"
	I0913 12:15:33.004233    5002 logs.go:123] Gathering logs for kube-controller-manager [3cb3ed80eb53] ...
	I0913 12:15:33.004242    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cb3ed80eb53"
	I0913 12:15:33.022357    5002 logs.go:123] Gathering logs for kube-scheduler [a97dac85d1aa] ...
	I0913 12:15:33.022371    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a97dac85d1aa"
	I0913 12:15:33.041218    5002 logs.go:123] Gathering logs for storage-provisioner [67f49ee7ce35] ...
	I0913 12:15:33.041228    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67f49ee7ce35"
	I0913 12:15:33.052662    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:15:33.052674    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:15:35.567029    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:40.568324    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:40.568359    5002 kubeadm.go:597] duration metric: took 4m4.061955083s to restartPrimaryControlPlane
	W0913 12:15:40.568398    5002 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 12:15:40.568410    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 12:15:41.584732    5002 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016351125s)
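	The 4m4s figure at kubeadm.go:597 is the restart budget: the healthz probe loops above ran until the deadline, after which minikube gave up restarting the existing control plane and fell back to `kubeadm reset --force` followed by a fresh `kubeadm init` (which succeeds a few seconds below). A minimal sketch of that poll-until-deadline-then-reset control flow, with `probe` and `reset` standing in for the healthz GET and the ssh-run reset command:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollThenReset sketches the fallback seen in the log: probe healthz until a
// deadline, then wipe the control plane and start over.
func pollThenReset(probe, reset func() error, budget time.Duration) error {
	deadline := time.Now().Add(budget) // ~4m in this run
	for time.Now().Before(deadline) {
		if probe() == nil {
			return nil // apiserver came back; no reset needed
		}
		time.Sleep(2 * time.Second) // the log shows short pauses between probe rounds
	}
	// Budget exhausted: "Unable to restart control-plane node(s), will reset cluster"
	return reset()
}

func main() {
	err := pollThenReset(
		func() error { return errors.New("context deadline exceeded") }, // always-failing probe
		func() error { fmt.Println("kubeadm reset --force"); return nil },
		6*time.Second, // shortened budget for the demo
	)
	fmt.Println("result:", err)
}
```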
	I0913 12:15:41.584805    5002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 12:15:41.589916    5002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 12:15:41.593183    5002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 12:15:41.596192    5002 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 12:15:41.596198    5002 kubeadm.go:157] found existing configuration files:
	
	I0913 12:15:41.596229    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I0913 12:15:41.598725    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 12:15:41.598751    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 12:15:41.601577    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I0913 12:15:41.604918    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 12:15:41.604946    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 12:15:41.608187    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I0913 12:15:41.610776    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 12:15:41.610806    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 12:15:41.613614    5002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I0913 12:15:41.617056    5002 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 12:15:41.617080    5002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
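	The grep/rm pairs above enforce a simple invariant: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint (here https://control-plane.minikube.internal:50511); any file that does not, or does not exist, is removed so the upcoming `kubeadm init` can regenerate it. A local sketch of that check, an illustration rather than minikube's implementation:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStale removes any kubeconfig that does not mention the expected
// endpoint, mirroring the `grep ...` / `rm -f ...` pairs in the log.
func cleanStale(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p) // no-op if the file is already gone
		}
	}
}

func main() {
	cleanStale("https://control-plane.minikube.internal:50511", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```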
	I0913 12:15:41.620302    5002 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 12:15:41.637167    5002 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 12:15:41.637209    5002 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 12:15:41.686767    5002 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 12:15:41.686818    5002 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 12:15:41.686870    5002 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 12:15:41.735948    5002 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 12:15:41.739200    5002 out.go:235]   - Generating certificates and keys ...
	I0913 12:15:41.739234    5002 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 12:15:41.739263    5002 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 12:15:41.739312    5002 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 12:15:41.739342    5002 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 12:15:41.739384    5002 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 12:15:41.739417    5002 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 12:15:41.739466    5002 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 12:15:41.739502    5002 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 12:15:41.739543    5002 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 12:15:41.739590    5002 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 12:15:41.739624    5002 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 12:15:41.739654    5002 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 12:15:41.957621    5002 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 12:15:42.048515    5002 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 12:15:42.120903    5002 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 12:15:42.268769    5002 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 12:15:42.297495    5002 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 12:15:42.297808    5002 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 12:15:42.297930    5002 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 12:15:42.386048    5002 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 12:15:42.394145    5002 out.go:235]   - Booting up control plane ...
	I0913 12:15:42.394205    5002 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 12:15:42.394241    5002 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 12:15:42.394277    5002 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 12:15:42.394324    5002 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 12:15:42.394400    5002 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 12:15:46.891016    5002 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501278 seconds
	I0913 12:15:46.891081    5002 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 12:15:46.894761    5002 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 12:15:47.418898    5002 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 12:15:47.419179    5002 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-748000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 12:15:47.923478    5002 kubeadm.go:310] [bootstrap-token] Using token: uzec93.1x1zuwjayh1tkgqh
	I0913 12:15:47.929407    5002 out.go:235]   - Configuring RBAC rules ...
	I0913 12:15:47.929478    5002 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 12:15:47.929526    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 12:15:47.933950    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 12:15:47.934831    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 12:15:47.935772    5002 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 12:15:47.936566    5002 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 12:15:47.939679    5002 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 12:15:48.114156    5002 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 12:15:48.327760    5002 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 12:15:48.328330    5002 kubeadm.go:310] 
	I0913 12:15:48.328403    5002 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 12:15:48.328411    5002 kubeadm.go:310] 
	I0913 12:15:48.328451    5002 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 12:15:48.328455    5002 kubeadm.go:310] 
	I0913 12:15:48.328471    5002 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 12:15:48.328497    5002 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 12:15:48.328523    5002 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 12:15:48.328528    5002 kubeadm.go:310] 
	I0913 12:15:48.328557    5002 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 12:15:48.328562    5002 kubeadm.go:310] 
	I0913 12:15:48.328666    5002 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 12:15:48.328671    5002 kubeadm.go:310] 
	I0913 12:15:48.328697    5002 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 12:15:48.328733    5002 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 12:15:48.328769    5002 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 12:15:48.328772    5002 kubeadm.go:310] 
	I0913 12:15:48.328845    5002 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 12:15:48.328892    5002 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 12:15:48.328897    5002 kubeadm.go:310] 
	I0913 12:15:48.329047    5002 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uzec93.1x1zuwjayh1tkgqh \
	I0913 12:15:48.329097    5002 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 \
	I0913 12:15:48.329110    5002 kubeadm.go:310] 	--control-plane 
	I0913 12:15:48.329112    5002 kubeadm.go:310] 
	I0913 12:15:48.329157    5002 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 12:15:48.329160    5002 kubeadm.go:310] 
	I0913 12:15:48.329219    5002 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uzec93.1x1zuwjayh1tkgqh \
	I0913 12:15:48.329281    5002 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de4133d636792fabd6bfbc110ddb1dc6f2d65eb5bd69b961dc2a84dacbe86065 
	I0913 12:15:48.329345    5002 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 12:15:48.329357    5002 cni.go:84] Creating CNI manager for ""
	I0913 12:15:48.329365    5002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:15:48.332151    5002 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 12:15:48.340254    5002 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 12:15:48.343363    5002 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 12:15:48.348715    5002 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 12:15:48.348819    5002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-748000 minikube.k8s.io/updated_at=2024_09_13T12_15_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fdd33bebc6743cfd1c61ec7fe066add478610a92 minikube.k8s.io/name=stopped-upgrade-748000 minikube.k8s.io/primary=true
	I0913 12:15:48.348824    5002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 12:15:48.391907    5002 kubeadm.go:1113] duration metric: took 43.138ms to wait for elevateKubeSystemPrivileges
	I0913 12:15:48.403691    5002 ops.go:34] apiserver oom_adj: -16
	I0913 12:15:48.403764    5002 kubeadm.go:394] duration metric: took 4m11.9115315s to StartCluster
	I0913 12:15:48.403777    5002 settings.go:142] acquiring lock: {Name:mk30414fb8bdc9357b580933d1c04157a3bd6358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:15:48.403864    5002 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:15:48.404261    5002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/kubeconfig: {Name:mk70034871f305cb9ef95a7630262c04e6c4f7f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:15:48.404441    5002 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:15:48.404542    5002 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 12:15:48.404571    5002 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:15:48.404576    5002 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-748000"
	I0913 12:15:48.404583    5002 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-748000"
	W0913 12:15:48.404587    5002 addons.go:243] addon storage-provisioner should already be in state true
	I0913 12:15:48.404601    5002 host.go:66] Checking if "stopped-upgrade-748000" exists ...
	I0913 12:15:48.404605    5002 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-748000"
	I0913 12:15:48.404630    5002 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-748000"
	I0913 12:15:48.405664    5002 kapi.go:59] client config for stopped-upgrade-748000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000/client.key", CAFile:"/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063b1540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
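
The dump above is a standard client-go rest.Config pointing at the guest's API server with the profile's client certificates. A minimal sketch of building an equivalent client, using only the host and certificate paths already logged (every other field is left at its default):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/stopped-upgrade-748000"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/client.crt",
			KeyFile:  profile + "/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19636-1170/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Any API call made with clientset hits the same endpoint the
	// healthz checks below keep probing.
	fmt.Println("client ready:", clientset != nil)
}
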
	I0913 12:15:48.405783    5002 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-748000"
	W0913 12:15:48.405789    5002 addons.go:243] addon default-storageclass should already be in state true
	I0913 12:15:48.405796    5002 host.go:66] Checking if "stopped-upgrade-748000" exists ...
	I0913 12:15:48.408146    5002 out.go:177] * Verifying Kubernetes components...
	I0913 12:15:48.408581    5002 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 12:15:48.412182    5002 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 12:15:48.412193    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:15:48.416141    5002 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 12:15:48.420253    5002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 12:15:48.424148    5002 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 12:15:48.424161    5002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 12:15:48.424171    5002 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/stopped-upgrade-748000/id_rsa Username:docker}
	I0913 12:15:48.519511    5002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 12:15:48.525238    5002 api_server.go:52] waiting for apiserver process to appear ...
	I0913 12:15:48.525336    5002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 12:15:48.529917    5002 api_server.go:72] duration metric: took 125.4675ms to wait for apiserver process to appear ...
	I0913 12:15:48.529926    5002 api_server.go:88] waiting for apiserver healthz status ...
	I0913 12:15:48.529934    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
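
From here the log settles into a polling pattern: each healthz probe blocks for roughly five seconds (the client timeout), is reported as "stopped", and is retried until the six-minute node wait expires. A rough sketch of that loop, with the timeout and deadline inferred from the log and all names assumed (not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Matches the ~5s gap between "Checking" and "stopped" lines.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The real checker trusts the cluster CA; skipped for brevity.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // analogous to api_server.go:269
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("apiserver never became healthy")
}
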
	I0913 12:15:48.567370    5002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 12:15:48.585207    5002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
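
Both addon manifests staged earlier are applied with the pinned in-guest kubectl and the in-guest kubeconfig, exactly as the two Run lines show. A hedged local sketch of that step (in the real run the command goes through the SSH runner inside the VM; the helper here is invented):

package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	// sudo accepts VAR=value arguments before the command, which is
	// how the logged invocation injects KUBECONFIG.
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
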
	I0913 12:15:48.935237    5002 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 12:15:48.935248    5002 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 12:15:53.531869    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:53.531938    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:15:58.532198    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:15:58.532233    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:03.532444    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:03.532465    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:08.533200    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:08.533226    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:13.533859    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:13.533891    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:18.535155    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:18.535199    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 12:16:18.936400    5002 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 12:16:18.940638    5002 out.go:177] * Enabled addons: storage-provisioner
	I0913 12:16:18.948555    5002 addons.go:510] duration metric: took 30.545290875s for enable addons: enabled=[storage-provisioner]
	I0913 12:16:23.536362    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:23.536402    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:28.538064    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:28.538104    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:33.540115    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:33.540150    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:38.542167    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:38.542189    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:43.544254    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:43.544309    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:48.546470    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:48.546602    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:16:48.558331    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:16:48.558418    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:16:48.568806    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:16:48.568889    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:16:48.579556    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:16:48.579638    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:16:48.590257    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:16:48.590340    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:16:48.600569    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:16:48.600653    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:16:48.610852    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:16:48.610937    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:16:48.620913    5002 logs.go:276] 0 containers: []
	W0913 12:16:48.620924    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:16:48.620989    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:16:48.631875    5002 logs.go:276] 1 containers: [b1243166faa3]
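
Each diagnostic pass begins by discovering one container per control-plane component through docker ps name filters, as in the block above. A small sketch of that discovery step, with the helper name assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose names match the kubeadm
// naming convention k8s_<component>, including stopped ones (-a).
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := containerIDs(c)
		fmt.Println(c, ids, err)
	}
}
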
	I0913 12:16:48.631890    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:16:48.631896    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:16:48.668307    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:16:48.668317    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:16:48.680296    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:16:48.680307    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:16:48.692155    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:16:48.692167    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:16:48.704578    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:16:48.704587    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:16:48.728128    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:16:48.728136    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:16:48.739697    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:16:48.739708    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:16:48.751700    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:16:48.751711    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:16:48.788735    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:16:48.788744    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:16:48.793238    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:16:48.793246    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:16:48.807846    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:16:48.807858    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:16:48.821953    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:16:48.821964    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:16:48.836800    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:16:48.836809    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
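
With the container IDs in hand, each log source is drained through a shell command capped at 400 lines (journalctl for host services, docker logs for containers), and the whole cycle repeats after every failed healthz round below. A compact sketch of that fan-out, using commands and container IDs copied from the log and an assumed gather helper:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the shell commands from the log and prints its output.
func gather(name, cmd string) {
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s\n", name, out)
}

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"kube-apiserver": "docker logs --tail 400 e993937e22f4",
		"etcd":           "docker logs --tail 400 17649600e2a2",
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
}
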
	I0913 12:16:51.362118    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:16:56.364383    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:16:56.364968    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:16:56.405429    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:16:56.405615    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:16:56.427564    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:16:56.427685    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:16:56.443268    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:16:56.443344    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:16:56.455290    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:16:56.455372    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:16:56.470577    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:16:56.470665    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:16:56.481083    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:16:56.481160    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:16:56.491067    5002 logs.go:276] 0 containers: []
	W0913 12:16:56.491083    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:16:56.491155    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:16:56.502521    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:16:56.502536    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:16:56.502541    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:16:56.540975    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:16:56.540988    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:16:56.545409    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:16:56.545418    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:16:56.579425    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:16:56.579437    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:16:56.593990    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:16:56.594001    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:16:56.608017    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:16:56.608029    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:16:56.619666    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:16:56.619676    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:16:56.630769    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:16:56.630779    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:16:56.645783    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:16:56.645792    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:16:56.657429    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:16:56.657439    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:16:56.678017    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:16:56.678027    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:16:56.695463    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:16:56.695474    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:16:56.718524    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:16:56.718530    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:16:59.231951    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:04.234543    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:04.234838    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:04.260262    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:04.260400    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:04.277199    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:04.277293    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:04.294390    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:04.294475    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:04.305041    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:04.305120    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:04.315094    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:04.315176    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:04.325509    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:04.325595    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:04.335444    5002 logs.go:276] 0 containers: []
	W0913 12:17:04.335463    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:04.335531    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:04.345880    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:04.345895    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:04.345900    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:04.360934    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:04.360944    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:04.396887    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:04.396894    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:04.401048    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:04.401054    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:04.435917    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:04.435931    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:04.450202    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:04.450212    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:04.464315    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:04.464326    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:04.475825    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:04.475836    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:04.487076    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:04.487086    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:04.504185    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:04.504195    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:04.515722    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:04.515732    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:04.527610    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:04.527620    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:04.538881    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:04.538891    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:07.065215    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:12.067916    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:12.068421    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:12.110638    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:12.110812    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:12.133147    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:12.133275    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:12.148286    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:12.148372    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:12.160602    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:12.160669    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:12.180067    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:12.180148    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:12.193844    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:12.193934    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:12.206345    5002 logs.go:276] 0 containers: []
	W0913 12:17:12.206357    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:12.206427    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:12.218668    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:12.218683    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:12.218689    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:12.223113    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:12.223123    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:12.262372    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:12.262388    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:12.280922    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:12.280945    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:12.305912    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:12.305922    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:12.317884    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:12.317892    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:12.338046    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:12.338057    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:12.349944    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:12.349954    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:12.370641    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:12.370652    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:12.407918    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:12.407929    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:12.422236    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:12.422248    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:12.441764    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:12.441774    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:12.453179    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:12.453195    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:14.966690    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:19.969298    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:19.969909    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:20.017036    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:20.017171    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:20.039200    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:20.039332    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:20.053251    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:20.053337    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:20.069537    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:20.069614    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:20.080604    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:20.080685    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:20.094824    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:20.094909    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:20.105420    5002 logs.go:276] 0 containers: []
	W0913 12:17:20.105431    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:20.105499    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:20.116405    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:20.116423    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:20.116430    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:20.154440    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:20.154450    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:20.168404    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:20.168417    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:20.182800    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:20.182814    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:20.195737    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:20.195747    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:20.210801    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:20.210812    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:20.222585    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:20.222595    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:20.251142    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:20.251152    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:20.262897    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:20.262907    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:20.267808    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:20.267814    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:20.309735    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:20.309752    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:20.329423    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:20.329437    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:20.340971    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:20.340983    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:22.860354    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:27.862413    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:27.863144    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:27.897771    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:27.897938    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:27.918142    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:27.918256    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:27.933853    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:27.933941    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:27.945918    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:27.945999    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:27.956405    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:27.956487    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:27.967041    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:27.967120    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:27.977232    5002 logs.go:276] 0 containers: []
	W0913 12:17:27.977243    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:27.977319    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:27.987654    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:27.987669    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:27.987675    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:28.002522    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:28.002534    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:28.014343    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:28.014353    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:28.039442    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:28.039449    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:28.050666    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:28.050678    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:28.089064    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:28.089071    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:28.092975    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:28.092982    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:28.127803    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:28.127817    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:28.143600    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:28.143613    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:28.161667    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:28.161677    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:28.175895    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:28.175908    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:28.196547    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:28.196557    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:28.208398    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:28.208409    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:30.722277    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:35.722690    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:35.723182    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:35.763534    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:35.763698    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:35.785272    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:35.785402    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:35.802227    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:35.802324    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:35.815155    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:35.815237    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:35.826392    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:35.826475    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:35.837231    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:35.837317    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:35.847476    5002 logs.go:276] 0 containers: []
	W0913 12:17:35.847488    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:35.847557    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:35.858224    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:35.858239    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:35.858246    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:35.875475    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:35.875487    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:35.886709    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:35.886718    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:35.898231    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:35.898244    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:35.936817    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:35.936827    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:35.940878    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:35.940885    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:35.952599    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:35.952614    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:35.964787    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:35.964801    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:35.979223    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:35.979237    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:35.990461    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:35.990475    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:36.015754    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:36.015761    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:36.072206    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:36.072219    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:36.088251    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:36.088263    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:38.604302    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:43.606104    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:43.606656    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:43.648969    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:43.649128    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:43.671352    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:43.671493    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:43.686512    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:43.686607    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:43.699092    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:43.699176    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:43.710349    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:43.710431    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:43.721196    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:43.721285    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:43.731121    5002 logs.go:276] 0 containers: []
	W0913 12:17:43.731138    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:43.731210    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:43.745263    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:43.745277    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:43.745282    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:43.783774    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:43.783782    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:43.818528    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:43.818540    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:43.830460    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:43.830470    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:43.841809    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:43.841819    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:43.855775    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:43.855783    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:43.860217    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:43.860223    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:43.881638    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:43.881649    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:43.896098    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:43.896108    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:43.908012    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:43.908022    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:43.925917    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:43.925927    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:43.937513    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:43.937526    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:43.960622    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:43.960629    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:46.473187    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:51.475430    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:51.475929    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:51.511128    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:51.511297    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:51.531697    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:51.531813    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:51.546035    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:51.546122    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:51.558203    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:51.558293    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:51.569082    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:51.569164    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:51.583960    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:51.584034    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:51.594778    5002 logs.go:276] 0 containers: []
	W0913 12:17:51.594789    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:51.594862    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:51.609821    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:51.609837    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:51.609843    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:51.625219    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:51.625229    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:51.648213    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:51.648219    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:51.662305    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:51.662318    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:51.667293    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:51.667302    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:51.700184    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:51.700196    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:51.714512    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:51.714527    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:51.728127    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:51.728136    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:51.739552    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:51.739562    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:51.756621    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:51.756631    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:51.774009    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:51.774024    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:51.810297    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:51.810305    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:51.822721    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:51.822734    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:17:54.341257    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:17:59.343318    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:17:59.343911    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:17:59.382814    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:17:59.382981    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:17:59.404220    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:17:59.404360    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:17:59.419838    5002 logs.go:276] 2 containers: [7bd7e96f8bf1 97d2b3004442]
	I0913 12:17:59.419923    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:17:59.432602    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:17:59.432677    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:17:59.443520    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:17:59.443604    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:17:59.454825    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:17:59.454894    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:17:59.468847    5002 logs.go:276] 0 containers: []
	W0913 12:17:59.468859    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:17:59.468928    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:17:59.479323    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:17:59.479338    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:17:59.479344    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:17:59.497303    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:17:59.497315    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:17:59.511310    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:17:59.511324    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:17:59.533249    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:17:59.533268    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:17:59.562756    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:17:59.562772    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:17:59.594408    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:17:59.594421    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:17:59.622732    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:17:59.622749    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:17:59.650449    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:17:59.650463    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:17:59.673288    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:17:59.673299    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:17:59.710491    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:17:59.710501    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:17:59.715156    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:17:59.715162    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:17:59.749510    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:17:59.749524    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:17:59.760935    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:17:59.760948    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:02.274574    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:07.276703    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:07.277204    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:07.314476    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:07.314624    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:07.333646    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:07.333736    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:07.347212    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:07.347291    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:07.359140    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:07.359208    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:07.369907    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:07.369971    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:07.379969    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:07.380048    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:07.396382    5002 logs.go:276] 0 containers: []
	W0913 12:18:07.396394    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:07.396464    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:07.406620    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:07.406635    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:07.406640    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:18:07.418412    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:07.418426    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:07.430432    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:07.430440    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:07.455451    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:07.455459    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:07.466659    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:07.466669    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:07.502892    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:07.502907    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:07.517809    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:07.517819    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:07.530100    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:07.530111    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:07.548957    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:07.548967    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:07.560241    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:07.560249    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:07.575176    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:07.575186    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:07.586880    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:07.586891    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:07.598840    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:07.598852    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:07.616876    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:07.616884    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:07.654895    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:07.654902    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
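[Editor's note: the block above is one full diagnostic cycle, and it repeats below with fresh timestamps: a healthz probe fails, then the same log-gathering pass runs. As a hedged shell-level sketch only (minikube drives these commands over SSH via ssh_runner.go; this is not its actual code), the gathering pass is roughly equivalent to running the following in the guest. Every command here is copied from the Run: lines above; the loop and echo are illustrative glue.]

#!/bin/bash
# Sketch of the diagnostic pass recorded above. For each control-plane
# component, find its container(s) and tail recent output, then collect
# the host-level sources minikube also gathers.
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet storage-provisioner; do
  for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
    echo "=== ${c} ${id} ==="          # illustrative separator, not in the log
    docker logs --tail 400 "${id}"
  done
done
sudo journalctl -u kubelet -n 400
sudo journalctl -u docker -u cri-docker -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig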
	I0913 12:18:10.161500    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:15.164133    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:15.164698    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:15.209490    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:15.209632    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:15.229797    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:15.229926    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:15.247434    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:15.247524    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:15.258758    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:15.258844    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:15.280512    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:15.280587    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:15.291123    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:15.291206    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:15.301877    5002 logs.go:276] 0 containers: []
	W0913 12:18:15.301891    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:15.301956    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:15.312082    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:15.312097    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:15.312103    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:15.323872    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:15.323882    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:15.335809    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:15.335819    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:15.367287    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:15.367297    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:15.381632    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:15.381642    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:15.393206    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:15.393216    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:18:15.409400    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:15.409411    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:15.426930    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:15.426940    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:15.441604    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:15.441612    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:15.453017    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:15.453027    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:15.477185    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:15.477193    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:15.489166    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:15.489176    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:15.527823    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:15.527833    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:18:15.532042    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:15.532048    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:15.565676    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:15.565687    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:18.089968    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:23.091440    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:23.091943    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:23.126378    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:23.126538    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:23.146848    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:23.146953    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:23.161955    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:23.162044    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:23.174461    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:23.174529    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:23.187197    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:23.187269    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:23.199045    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:23.199121    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:23.210055    5002 logs.go:276] 0 containers: []
	W0913 12:18:23.210068    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:23.210139    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:23.220843    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:23.220860    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:23.220865    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:23.235075    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:23.235086    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:23.246873    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:23.246885    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:23.282097    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:23.282110    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:23.297020    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:23.297033    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:23.310285    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:23.310294    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:23.325186    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:23.325196    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:23.350540    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:23.350547    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:18:23.355033    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:23.355040    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:23.366168    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:23.366180    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:23.385607    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:23.385621    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:23.397139    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:23.397150    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:23.408860    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:23.408870    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:23.445539    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:23.445550    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:23.457623    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:23.457638    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:18:25.972711    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:30.974748    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:30.975024    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:31.001300    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:31.001433    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:31.018242    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:31.018350    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:31.032213    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:31.032309    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:31.043974    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:31.044050    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:31.054314    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:31.054390    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:31.065091    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:31.065165    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:31.075615    5002 logs.go:276] 0 containers: []
	W0913 12:18:31.075631    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:31.075701    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:31.085850    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:31.085866    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:31.085872    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:31.101391    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:31.101406    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:31.117628    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:31.117637    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:31.135419    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:31.135428    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:18:31.139591    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:31.139601    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:31.151320    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:31.151330    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:31.162382    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:31.162391    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:31.186241    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:31.186250    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:31.222319    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:31.222326    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:31.257172    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:31.257185    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:31.273095    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:31.273105    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:31.284966    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:31.284976    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:31.299660    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:31.299670    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:31.314947    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:31.314958    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:31.326358    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:31.326368    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
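[Editor's note: each cycle is gated by the same probe against https://10.0.2.15:8443/healthz; the "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" lines show minikube's Go HTTP client giving up after the roughly 5 seconds visible between each Checking/stopped timestamp pair. A minimal hedged equivalent of that probe, assuming curl is available in the guest (the log itself uses the built-in Go client, not curl):]

# -k skips TLS verification for the VM's self-signed apiserver cert;
# --max-time 5 mirrors the ~5s client timeout seen in the log.
curl -sk --max-time 5 https://10.0.2.15:8443/healthz \
  || echo "apiserver not healthy yet"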
	I0913 12:18:33.837796    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:38.839780    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:38.839992    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:38.860637    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:38.860739    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:38.875897    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:38.875986    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:38.888007    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:38.888112    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:38.898956    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:38.899029    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:38.909108    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:38.909183    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:38.919693    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:38.919759    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:38.930272    5002 logs.go:276] 0 containers: []
	W0913 12:18:38.930283    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:38.930350    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:38.941294    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:38.941309    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:38.941314    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:38.957653    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:38.957665    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:18:38.969619    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:38.969633    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:38.981259    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:38.981270    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:39.017790    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:39.017797    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:39.052628    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:39.052643    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:39.070412    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:39.070420    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:39.081887    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:39.081898    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:39.107309    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:39.107316    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:18:39.111390    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:39.111396    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:39.126019    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:39.126031    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:39.148543    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:39.148556    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:39.160168    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:39.160179    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:39.174248    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:39.174259    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:39.187399    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:39.187409    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:41.700639    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:46.706614    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:46.706781    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:46.717925    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:46.717993    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:46.728574    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:46.728639    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:46.739240    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:46.739331    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:46.751265    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:46.751330    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:46.761385    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:46.761458    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:46.771860    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:46.771938    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:46.781553    5002 logs.go:276] 0 containers: []
	W0913 12:18:46.781563    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:46.781622    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:46.792312    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:46.792330    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:46.792335    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:46.803334    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:46.803346    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:46.814395    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:46.814405    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:46.848545    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:46.848554    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:46.862575    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:46.862587    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:46.876914    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:46.876924    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:46.888911    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:46.888921    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:18:46.893758    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:46.893765    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:46.905021    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:46.905035    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:46.919329    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:46.919339    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:46.943127    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:46.943136    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:46.957142    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:46.957155    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:18:46.968624    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:46.968636    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:46.985581    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:46.985594    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:46.997840    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:46.997848    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:49.540014    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:18:54.548313    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:18:54.548847    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:18:54.590467    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:18:54.590626    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:18:54.614466    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:18:54.614596    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:18:54.632145    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:18:54.632239    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:18:54.644655    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:18:54.644733    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:18:54.655552    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:18:54.655637    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:18:54.670684    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:18:54.670762    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:18:54.685491    5002 logs.go:276] 0 containers: []
	W0913 12:18:54.685503    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:18:54.685573    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:18:54.697078    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:18:54.697095    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:18:54.697101    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:18:54.709004    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:18:54.709014    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:18:54.734063    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:18:54.734070    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:18:54.746152    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:18:54.746162    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:18:54.750285    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:18:54.750292    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:18:54.785029    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:18:54.785040    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:18:54.796751    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:18:54.796763    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:18:54.810818    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:18:54.810830    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:18:54.827999    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:18:54.828009    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:18:54.864460    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:18:54.864467    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:18:54.876146    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:18:54.876154    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:18:54.890241    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:18:54.890249    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:18:54.901731    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:18:54.901745    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:18:54.922160    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:18:54.922172    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:18:54.935705    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:18:54.935720    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:18:57.455138    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:02.461493    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:02.461592    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:19:02.474563    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:19:02.474651    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:19:02.487073    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:19:02.487153    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:19:02.501513    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:19:02.501599    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:19:02.515856    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:19:02.515944    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:19:02.536299    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:19:02.536382    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:19:02.547320    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:19:02.547397    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:19:02.557981    5002 logs.go:276] 0 containers: []
	W0913 12:19:02.557995    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:19:02.558072    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:19:02.573718    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:19:02.573734    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:19:02.573740    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:19:02.611138    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:19:02.611147    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:19:02.615542    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:19:02.615552    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:19:02.629361    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:19:02.629374    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:19:02.641052    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:19:02.641064    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:19:02.656156    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:19:02.656166    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:19:02.692112    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:19:02.692124    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:19:02.704053    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:19:02.704066    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:19:02.727372    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:19:02.727379    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:19:02.739218    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:19:02.739228    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:19:02.755103    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:19:02.755114    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:19:02.771620    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:19:02.771632    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:19:02.783667    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:19:02.783680    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:19:02.801235    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:19:02.801246    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:19:02.815225    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:19:02.815235    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:19:05.330504    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:10.335452    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:10.335886    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:19:10.368166    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:19:10.368330    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:19:10.387780    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:19:10.387880    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:19:10.402220    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:19:10.402308    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:19:10.413863    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:19:10.413938    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:19:10.425940    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:19:10.426023    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:19:10.436775    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:19:10.436853    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:19:10.447194    5002 logs.go:276] 0 containers: []
	W0913 12:19:10.447208    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:19:10.447273    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:19:10.458820    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:19:10.458836    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:19:10.458841    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:19:10.476778    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:19:10.476790    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:19:10.490086    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:19:10.490096    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:19:10.501955    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:19:10.501964    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:19:10.519448    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:19:10.519457    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:19:10.531261    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:19:10.531272    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:19:10.555382    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:19:10.555390    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:19:10.593080    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:19:10.593089    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:19:10.628726    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:19:10.628743    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:19:10.640900    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:19:10.640910    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:19:10.645652    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:19:10.645659    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:19:10.660823    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:19:10.660833    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:19:10.672767    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:19:10.672777    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:19:10.687399    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:19:10.687411    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:19:10.705648    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:19:10.705659    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:19:13.220017    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:18.223409    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:18.223667    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:19:18.248228    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:19:18.248370    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:19:18.265557    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:19:18.265655    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:19:18.278613    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:19:18.278699    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:19:18.289833    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:19:18.289904    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:19:18.300310    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:19:18.300379    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:19:18.310825    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:19:18.310910    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:19:18.325825    5002 logs.go:276] 0 containers: []
	W0913 12:19:18.325838    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:19:18.325925    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:19:18.335939    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:19:18.335956    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:19:18.335961    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:19:18.340559    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:19:18.340565    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:19:18.363912    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:19:18.363918    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:19:18.377828    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:19:18.377836    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:19:18.389418    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:19:18.389431    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:19:18.425516    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:19:18.425525    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:19:18.459398    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:19:18.459412    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:19:18.471670    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:19:18.471680    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:19:18.483577    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:19:18.483594    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:19:18.498195    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:19:18.498205    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:19:18.509492    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:19:18.509502    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:19:18.523606    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:19:18.523616    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:19:18.535242    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:19:18.535252    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:19:18.547143    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:19:18.547154    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:19:18.564623    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:19:18.564632    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:19:21.078517    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:26.080434    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:26.080514    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:19:26.097599    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:19:26.097665    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:19:26.108938    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:19:26.109004    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:19:26.120343    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:19:26.120424    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:19:26.131585    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:19:26.131646    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:19:26.144087    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:19:26.144154    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:19:26.155577    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:19:26.155648    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:19:26.165822    5002 logs.go:276] 0 containers: []
	W0913 12:19:26.165838    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:19:26.165905    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:19:26.177000    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:19:26.177016    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:19:26.177021    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:19:26.190232    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:19:26.190248    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:19:26.194891    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:19:26.194899    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:19:26.211863    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:19:26.211875    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:19:26.226499    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:19:26.226510    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:19:26.240823    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:19:26.240832    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:19:26.278258    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:19:26.278274    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:19:26.291027    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:19:26.291036    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:19:26.303423    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:19:26.303435    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:19:26.340271    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:19:26.340282    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:19:26.355875    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:19:26.355886    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:19:26.369034    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:19:26.369045    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:19:26.383662    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:19:26.383674    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:19:26.402550    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:19:26.402563    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:19:26.428941    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:19:26.428955    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:19:28.943357    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:33.944796    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:33.945169    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:19:33.981473    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:19:33.981603    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:19:34.003410    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:19:34.003526    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:19:34.018914    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:19:34.019015    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:19:34.033667    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:19:34.033756    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:19:34.045693    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:19:34.045780    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:19:34.060912    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:19:34.061000    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:19:34.072729    5002 logs.go:276] 0 containers: []
	W0913 12:19:34.072743    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:19:34.072819    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:19:34.084883    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:19:34.084902    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:19:34.084908    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:19:34.098659    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:19:34.098673    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:19:34.124888    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:19:34.124913    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:19:34.138838    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:19:34.138852    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:19:34.143835    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:19:34.143848    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:19:34.159669    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:19:34.159677    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:19:34.171812    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:19:34.171822    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:19:34.183347    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:19:34.183358    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:19:34.221445    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:19:34.221459    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:19:34.255568    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:19:34.255584    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:19:34.275519    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:19:34.275529    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:19:34.286824    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:19:34.286835    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:19:34.298924    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:19:34.298934    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:19:34.313605    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:19:34.313617    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:19:34.325310    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:19:34.325319    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:19:36.843236    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:41.844333    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:41.844901    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 12:19:41.888667    5002 logs.go:276] 1 containers: [e993937e22f4]
	I0913 12:19:41.888830    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 12:19:41.910010    5002 logs.go:276] 1 containers: [17649600e2a2]
	I0913 12:19:41.910148    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 12:19:41.925550    5002 logs.go:276] 4 containers: [32e8ba33e96e 779ed74d601d 7bd7e96f8bf1 97d2b3004442]
	I0913 12:19:41.925648    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 12:19:41.938119    5002 logs.go:276] 1 containers: [63d308aafdef]
	I0913 12:19:41.938199    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 12:19:41.949402    5002 logs.go:276] 1 containers: [65b3e2bfdcbf]
	I0913 12:19:41.949480    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 12:19:41.960701    5002 logs.go:276] 1 containers: [b1c22515c53e]
	I0913 12:19:41.960779    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 12:19:41.971787    5002 logs.go:276] 0 containers: []
	W0913 12:19:41.971798    5002 logs.go:278] No container was found matching "kindnet"
	I0913 12:19:41.971866    5002 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 12:19:41.982364    5002 logs.go:276] 1 containers: [b1243166faa3]
	I0913 12:19:41.982385    5002 logs.go:123] Gathering logs for dmesg ...
	I0913 12:19:41.982391    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 12:19:41.987451    5002 logs.go:123] Gathering logs for coredns [779ed74d601d] ...
	I0913 12:19:41.987458    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 779ed74d601d"
	I0913 12:19:41.999618    5002 logs.go:123] Gathering logs for coredns [7bd7e96f8bf1] ...
	I0913 12:19:41.999632    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bd7e96f8bf1"
	I0913 12:19:42.011926    5002 logs.go:123] Gathering logs for kube-controller-manager [b1c22515c53e] ...
	I0913 12:19:42.011942    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1c22515c53e"
	I0913 12:19:42.029632    5002 logs.go:123] Gathering logs for container status ...
	I0913 12:19:42.029644    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 12:19:42.041243    5002 logs.go:123] Gathering logs for describe nodes ...
	I0913 12:19:42.041256    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 12:19:42.083038    5002 logs.go:123] Gathering logs for etcd [17649600e2a2] ...
	I0913 12:19:42.083049    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17649600e2a2"
	I0913 12:19:42.097912    5002 logs.go:123] Gathering logs for kube-apiserver [e993937e22f4] ...
	I0913 12:19:42.097923    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e993937e22f4"
	I0913 12:19:42.115344    5002 logs.go:123] Gathering logs for coredns [32e8ba33e96e] ...
	I0913 12:19:42.115354    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32e8ba33e96e"
	I0913 12:19:42.126967    5002 logs.go:123] Gathering logs for coredns [97d2b3004442] ...
	I0913 12:19:42.126978    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97d2b3004442"
	I0913 12:19:42.138910    5002 logs.go:123] Gathering logs for Docker ...
	I0913 12:19:42.138919    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 12:19:42.163421    5002 logs.go:123] Gathering logs for kubelet ...
	I0913 12:19:42.163428    5002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 12:19:42.201363    5002 logs.go:123] Gathering logs for kube-scheduler [63d308aafdef] ...
	I0913 12:19:42.201371    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63d308aafdef"
	I0913 12:19:42.216015    5002 logs.go:123] Gathering logs for kube-proxy [65b3e2bfdcbf] ...
	I0913 12:19:42.216030    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65b3e2bfdcbf"
	I0913 12:19:42.230421    5002 logs.go:123] Gathering logs for storage-provisioner [b1243166faa3] ...
	I0913 12:19:42.230431    5002 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1243166faa3"
	I0913 12:19:44.742207    5002 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 12:19:49.745226    5002 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 12:19:49.752623    5002 out.go:201] 
	W0913 12:19:49.760614    5002 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0913 12:19:49.760622    5002 out.go:270] * 
	* 
	W0913 12:19:49.761026    5002 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:19:49.768568    5002 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-748000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.83s)
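The Upgrade failure above is a readiness timeout rather than a crash: minikube polled https://10.0.2.15:8443/healthz with a 5-second client timeout (compare the 12:19:36.843 check with the 12:19:41.844 "stopped" line) until its 6m0s node wait expired, even though the component logs gathered in between show the control-plane containers were present. A minimal manual version of that probe, run from inside the guest, is sketched below; it assumes curl is available in the guest image, and takes the binary, profile name, and apiserver address from this log.

    # Re-run the healthz probe minikube performs above (sketch; -k skips
    # TLS verification because only HTTP reachability is being tested).
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-748000 -- \
      curl -k --max-time 5 https://10.0.2.15:8443/healthz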

TestPause/serial/Start (10.15s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-552000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0913 12:16:56.381785    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-552000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.086713833s)

-- stdout --
	* [pause-552000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-552000" primary control-plane node in "pause-552000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-552000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-552000 -n pause-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-552000 -n pause-552000: exit status 7 (62.45275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-552000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.15s)
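From this point on, every qemu2 start in the run fails the same way: nothing was listening on /var/run/socket_vmnet on the build agent, so host creation aborts with "Connection refused" before a VM ever boots. That points at the agent environment rather than the individual tests. A quick environment check is sketched below; the socket path comes from these logs, while the daemon binary location and the --vmnet-gateway value are assumptions based on socket_vmnet's documented usage.

    # Check whether the socket_vmnet daemon is up (sketch).
    ls -l /var/run/socket_vmnet                 # socket file should exist
    sudo lsof -U | grep socket_vmnet            # a daemon should hold it open
    # If nothing is listening, restart the daemon (vmnet requires root);
    # the binary path and gateway address here are assumed, not from this log:
    sudo /opt/socket_vmnet/bin/socket_vmnet \
      --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &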

TestNoKubernetes/serial/StartWithK8s (10.02s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-114000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-114000 --driver=qemu2 : exit status 80 (9.989755667s)

-- stdout --
	* [NoKubernetes-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-114000" primary control-plane node in "NoKubernetes-114000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-114000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-114000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000: exit status 7 (31.55475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.02s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244971416s)

-- stdout --
	* [NoKubernetes-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-114000
	* Restarting existing qemu2 VM for "NoKubernetes-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-114000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000: exit status 7 (64.025416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --driver=qemu2 : exit status 80 (5.254379292s)

-- stdout --
	* [NoKubernetes-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-114000
	* Restarting existing qemu2 VM for "NoKubernetes-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-114000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000: exit status 7 (60.705708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-114000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-114000 --driver=qemu2 : exit status 80 (5.2778175s)

-- stdout --
	* [NoKubernetes-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-114000
	* Restarting existing qemu2 VM for "NoKubernetes-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-114000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-114000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-114000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-114000 -n NoKubernetes-114000: exit status 7 (58.623792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-114000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)

TestNetworkPlugins/group/auto/Start (9.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.933351291s)

-- stdout --
	* [auto-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-151000" primary control-plane node in "auto-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:18:01.845328    5209 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:18:01.845449    5209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:01.845452    5209 out.go:358] Setting ErrFile to fd 2...
	I0913 12:18:01.845455    5209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:01.845578    5209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:18:01.846634    5209 out.go:352] Setting JSON to false
	I0913 12:18:01.862696    5209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4644,"bootTime":1726250437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:18:01.862764    5209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:18:01.867450    5209 out.go:177] * [auto-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:18:01.875196    5209 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:18:01.875231    5209 notify.go:220] Checking for updates...
	I0913 12:18:01.881361    5209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:18:01.882868    5209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:18:01.886342    5209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:18:01.889339    5209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:18:01.892338    5209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:18:01.895724    5209 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:18:01.895790    5209 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:18:01.895840    5209 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:18:01.900319    5209 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:18:01.907345    5209 start.go:297] selected driver: qemu2
	I0913 12:18:01.907353    5209 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:18:01.907361    5209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:18:01.909414    5209 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:18:01.912358    5209 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:18:01.915443    5209 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:18:01.915457    5209 cni.go:84] Creating CNI manager for ""
	I0913 12:18:01.915477    5209 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:18:01.915484    5209 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:18:01.915512    5209 start.go:340] cluster config:
	{Name:auto-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:18:01.918817    5209 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:18:01.926313    5209 out.go:177] * Starting "auto-151000" primary control-plane node in "auto-151000" cluster
	I0913 12:18:01.934314    5209 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:18:01.934331    5209 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:18:01.934342    5209 cache.go:56] Caching tarball of preloaded images
	I0913 12:18:01.934419    5209 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:18:01.934425    5209 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:18:01.934479    5209 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/auto-151000/config.json ...
	I0913 12:18:01.934494    5209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/auto-151000/config.json: {Name:mk617d5289a60885a5778cf71e0c80b9b74c2750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:18:01.934766    5209 start.go:360] acquireMachinesLock for auto-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:01.934796    5209 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "auto-151000"
	I0913 12:18:01.934805    5209 start.go:93] Provisioning new machine with config: &{Name:auto-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:01.934830    5209 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:01.938377    5209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:01.953931    5209 start.go:159] libmachine.API.Create for "auto-151000" (driver="qemu2")
	I0913 12:18:01.953953    5209 client.go:168] LocalClient.Create starting
	I0913 12:18:01.954006    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:01.954038    5209 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:01.954047    5209 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:01.954084    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:01.954110    5209 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:01.954117    5209 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:01.954540    5209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:02.113229    5209 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:02.341300    5209 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:02.341308    5209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:02.341563    5209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2
	I0913 12:18:02.351103    5209 main.go:141] libmachine: STDOUT: 
	I0913 12:18:02.351120    5209 main.go:141] libmachine: STDERR: 
	I0913 12:18:02.351183    5209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2 +20000M
	I0913 12:18:02.359091    5209 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:02.359111    5209 main.go:141] libmachine: STDERR: 
	I0913 12:18:02.359126    5209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2
	I0913 12:18:02.359132    5209 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:02.359142    5209 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:02.359170    5209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ca:99:70:6f:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2
	I0913 12:18:02.360704    5209 main.go:141] libmachine: STDOUT: 
	I0913 12:18:02.360719    5209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:02.360743    5209 client.go:171] duration metric: took 406.801084ms to LocalClient.Create
	I0913 12:18:04.362872    5209 start.go:128] duration metric: took 2.428108417s to createHost
	I0913 12:18:04.362962    5209 start.go:83] releasing machines lock for "auto-151000", held for 2.4282525s
	W0913 12:18:04.363040    5209 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:04.369754    5209 out.go:177] * Deleting "auto-151000" in qemu2 ...
	W0913 12:18:04.405174    5209 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:04.405209    5209 start.go:729] Will try again in 5 seconds ...
	I0913 12:18:09.407348    5209 start.go:360] acquireMachinesLock for auto-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:09.407945    5209 start.go:364] duration metric: took 489.791µs to acquireMachinesLock for "auto-151000"
	I0913 12:18:09.408152    5209 start.go:93] Provisioning new machine with config: &{Name:auto-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:09.408420    5209 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:09.414153    5209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:09.462416    5209 start.go:159] libmachine.API.Create for "auto-151000" (driver="qemu2")
	I0913 12:18:09.462483    5209 client.go:168] LocalClient.Create starting
	I0913 12:18:09.462605    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:09.462678    5209 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:09.462695    5209 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:09.462776    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:09.462820    5209 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:09.462832    5209 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:09.463529    5209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:09.628565    5209 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:09.678402    5209 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:09.678409    5209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:09.678647    5209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2
	I0913 12:18:09.687893    5209 main.go:141] libmachine: STDOUT: 
	I0913 12:18:09.687911    5209 main.go:141] libmachine: STDERR: 
	I0913 12:18:09.687973    5209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2 +20000M
	I0913 12:18:09.695908    5209 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:09.695923    5209 main.go:141] libmachine: STDERR: 
	I0913 12:18:09.695936    5209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2
	I0913 12:18:09.695941    5209 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:09.695950    5209 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:09.695981    5209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ac:ea:cd:12:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/auto-151000/disk.qcow2
	I0913 12:18:09.697605    5209 main.go:141] libmachine: STDOUT: 
	I0913 12:18:09.697619    5209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:09.697632    5209 client.go:171] duration metric: took 235.152792ms to LocalClient.Create
	I0913 12:18:11.699766    5209 start.go:128] duration metric: took 2.291403708s to createHost
	I0913 12:18:11.699838    5209 start.go:83] releasing machines lock for "auto-151000", held for 2.291953625s
	W0913 12:18:11.700479    5209 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:11.714190    5209 out.go:201] 
	W0913 12:18:11.719302    5209 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:18:11.719352    5209 out.go:270] * 
	* 
	W0913 12:18:11.721274    5209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:18:11.736171    5209 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.94s)
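The verbose trace above shows why the refused connection is fatal so early: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to /var/run/socket_vmnet and hand the resulting connection to qemu as fd 3 (-netdev socket,id=net0,fd=3) before the VM can start. The wrapper can be exercised in isolation, as sketched below; `true` is only a placeholder command, and the description assumes socket_vmnet_client's usual connect-then-exec semantics.

    # Exercise the wrapper step from the trace above in isolation (sketch).
    # socket_vmnet_client connects to the unix socket, then execs the given
    # command with the connection as fd 3; with no daemon listening it fails
    # with the same "Connection refused" before the command ever runs.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true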

TestNetworkPlugins/group/flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.812709334s)

-- stdout --
	* [flannel-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-151000" primary control-plane node in "flannel-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:18:13.903375    5321 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:18:13.903491    5321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:13.903494    5321 out.go:358] Setting ErrFile to fd 2...
	I0913 12:18:13.903496    5321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:13.903660    5321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:18:13.904710    5321 out.go:352] Setting JSON to false
	I0913 12:18:13.920890    5321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4656,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:18:13.920993    5321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:18:13.926835    5321 out.go:177] * [flannel-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:18:13.935618    5321 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:18:13.935653    5321 notify.go:220] Checking for updates...
	I0913 12:18:13.941158    5321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:18:13.944585    5321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:18:13.947569    5321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:18:13.950653    5321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:18:13.953582    5321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:18:13.956878    5321 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:18:13.956939    5321 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:18:13.956985    5321 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:18:13.961614    5321 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:18:13.968653    5321 start.go:297] selected driver: qemu2
	I0913 12:18:13.968660    5321 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:18:13.968668    5321 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:18:13.970886    5321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:18:13.973597    5321 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:18:13.976681    5321 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:18:13.976707    5321 cni.go:84] Creating CNI manager for "flannel"
	I0913 12:18:13.976710    5321 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0913 12:18:13.976746    5321 start.go:340] cluster config:
	{Name:flannel-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:18:13.980232    5321 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:18:13.987460    5321 out.go:177] * Starting "flannel-151000" primary control-plane node in "flannel-151000" cluster
	I0913 12:18:13.991697    5321 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:18:13.991710    5321 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:18:13.991723    5321 cache.go:56] Caching tarball of preloaded images
	I0913 12:18:13.991781    5321 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:18:13.991786    5321 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:18:13.991837    5321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/flannel-151000/config.json ...
	I0913 12:18:13.991847    5321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/flannel-151000/config.json: {Name:mk41191b93e2359e6008ea567b675a02379fe894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:18:13.992066    5321 start.go:360] acquireMachinesLock for flannel-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:13.992097    5321 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "flannel-151000"
	I0913 12:18:13.992110    5321 start.go:93] Provisioning new machine with config: &{Name:flannel-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:13.992140    5321 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:13.999656    5321 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:14.015508    5321 start.go:159] libmachine.API.Create for "flannel-151000" (driver="qemu2")
	I0913 12:18:14.015547    5321 client.go:168] LocalClient.Create starting
	I0913 12:18:14.015604    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:14.015637    5321 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:14.015646    5321 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:14.015699    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:14.015722    5321 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:14.015731    5321 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:14.016057    5321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:14.173079    5321 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:14.248746    5321 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:14.248755    5321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:14.248983    5321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2
	I0913 12:18:14.259264    5321 main.go:141] libmachine: STDOUT: 
	I0913 12:18:14.259300    5321 main.go:141] libmachine: STDERR: 
	I0913 12:18:14.259376    5321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2 +20000M
	I0913 12:18:14.267941    5321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:14.267959    5321 main.go:141] libmachine: STDERR: 
	I0913 12:18:14.267977    5321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2
	I0913 12:18:14.267982    5321 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:14.267993    5321 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:14.268023    5321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:86:0d:80:46:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2
	I0913 12:18:14.269716    5321 main.go:141] libmachine: STDOUT: 
	I0913 12:18:14.269732    5321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:14.269755    5321 client.go:171] duration metric: took 254.212166ms to LocalClient.Create
	I0913 12:18:16.271793    5321 start.go:128] duration metric: took 2.27972625s to createHost
	I0913 12:18:16.271830    5321 start.go:83] releasing machines lock for "flannel-151000", held for 2.279817167s
	W0913 12:18:16.271874    5321 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:16.289942    5321 out.go:177] * Deleting "flannel-151000" in qemu2 ...
	W0913 12:18:16.309702    5321 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:16.309715    5321 start.go:729] Will try again in 5 seconds ...
	I0913 12:18:21.311708    5321 start.go:360] acquireMachinesLock for flannel-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:21.312431    5321 start.go:364] duration metric: took 558.292µs to acquireMachinesLock for "flannel-151000"
	I0913 12:18:21.312528    5321 start.go:93] Provisioning new machine with config: &{Name:flannel-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:21.312915    5321 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:21.320596    5321 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:21.372536    5321 start.go:159] libmachine.API.Create for "flannel-151000" (driver="qemu2")
	I0913 12:18:21.372595    5321 client.go:168] LocalClient.Create starting
	I0913 12:18:21.372724    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:21.372800    5321 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:21.372824    5321 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:21.372884    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:21.372930    5321 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:21.372944    5321 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:21.373522    5321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:21.542237    5321 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:21.626862    5321 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:21.626869    5321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:21.627082    5321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2
	I0913 12:18:21.636244    5321 main.go:141] libmachine: STDOUT: 
	I0913 12:18:21.636263    5321 main.go:141] libmachine: STDERR: 
	I0913 12:18:21.636323    5321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2 +20000M
	I0913 12:18:21.644443    5321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:21.644458    5321 main.go:141] libmachine: STDERR: 
	I0913 12:18:21.644469    5321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2
	I0913 12:18:21.644473    5321 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:21.644480    5321 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:21.644513    5321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:e1:fc:a4:22:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/flannel-151000/disk.qcow2
	I0913 12:18:21.646069    5321 main.go:141] libmachine: STDOUT: 
	I0913 12:18:21.646083    5321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:21.646095    5321 client.go:171] duration metric: took 273.503958ms to LocalClient.Create
	I0913 12:18:23.648138    5321 start.go:128] duration metric: took 2.335297542s to createHost
	I0913 12:18:23.648166    5321 start.go:83] releasing machines lock for "flannel-151000", held for 2.335787125s
	W0913 12:18:23.648343    5321 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:23.662726    5321 out.go:201] 
	W0913 12:18:23.667800    5321 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:18:23.667822    5321 out.go:270] * 
	* 
	W0913 12:18:23.668849    5321 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:18:23.677764    5321 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
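Note that the run above never reaches flannel itself: both provisioning attempts abort inside createHost when socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so the exit status 80 reflects a broken host networking helper on the build agent rather than a CNI regression. The check that fails can be reproduced outside the harness; the following is a minimal probe sketch (illustrative only, assuming the socket_vmnet daemon serves a stream Unix socket at its default path):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	// Dial the socket_vmnet control socket the same way socket_vmnet_client
	// must before qemu-system-aarch64 can be started. "connection refused"
	// here means the socket path exists but no daemon is accepting on it,
	// which matches the ERROR lines captured above.
	func main() {
		const sock = "/var/run/socket_vmnet" // default SocketVMnetPath from the cluster config
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}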

TestNetworkPlugins/group/kindnet/Start (9.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.958094125s)

-- stdout --
	* [kindnet-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-151000" primary control-plane node in "kindnet-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:18:25.981858    5439 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:18:25.981979    5439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:25.981983    5439 out.go:358] Setting ErrFile to fd 2...
	I0913 12:18:25.981985    5439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:25.982129    5439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:18:25.983214    5439 out.go:352] Setting JSON to false
	I0913 12:18:25.999704    5439 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4668,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:18:25.999784    5439 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:18:26.005102    5439 out.go:177] * [kindnet-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:18:26.013044    5439 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:18:26.013101    5439 notify.go:220] Checking for updates...
	I0913 12:18:26.021924    5439 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:18:26.025025    5439 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:18:26.028066    5439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:18:26.029501    5439 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:18:26.033024    5439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:18:26.036474    5439 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:18:26.036546    5439 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:18:26.036591    5439 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:18:26.040829    5439 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:18:26.048039    5439 start.go:297] selected driver: qemu2
	I0913 12:18:26.048044    5439 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:18:26.048050    5439 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:18:26.050387    5439 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:18:26.054060    5439 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:18:26.057087    5439 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:18:26.057103    5439 cni.go:84] Creating CNI manager for "kindnet"
	I0913 12:18:26.057106    5439 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 12:18:26.057135    5439 start.go:340] cluster config:
	{Name:kindnet-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:18:26.060861    5439 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:18:26.066917    5439 out.go:177] * Starting "kindnet-151000" primary control-plane node in "kindnet-151000" cluster
	I0913 12:18:26.071011    5439 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:18:26.071028    5439 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:18:26.071034    5439 cache.go:56] Caching tarball of preloaded images
	I0913 12:18:26.071101    5439 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:18:26.071109    5439 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:18:26.071175    5439 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kindnet-151000/config.json ...
	I0913 12:18:26.071187    5439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kindnet-151000/config.json: {Name:mkbcc9e4fdd7522c67f7e22ecae980aa15cbf019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:18:26.071503    5439 start.go:360] acquireMachinesLock for kindnet-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:26.071542    5439 start.go:364] duration metric: took 32.708µs to acquireMachinesLock for "kindnet-151000"
	I0913 12:18:26.071554    5439 start.go:93] Provisioning new machine with config: &{Name:kindnet-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:26.071579    5439 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:26.080019    5439 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:26.097004    5439 start.go:159] libmachine.API.Create for "kindnet-151000" (driver="qemu2")
	I0913 12:18:26.097034    5439 client.go:168] LocalClient.Create starting
	I0913 12:18:26.097095    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:26.097125    5439 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:26.097133    5439 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:26.097170    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:26.097198    5439 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:26.097207    5439 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:26.097582    5439 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:26.255308    5439 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:26.402382    5439 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:26.402395    5439 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:26.402632    5439 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2
	I0913 12:18:26.412311    5439 main.go:141] libmachine: STDOUT: 
	I0913 12:18:26.412328    5439 main.go:141] libmachine: STDERR: 
	I0913 12:18:26.412380    5439 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2 +20000M
	I0913 12:18:26.420431    5439 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:26.420447    5439 main.go:141] libmachine: STDERR: 
	I0913 12:18:26.420479    5439 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2
	I0913 12:18:26.420483    5439 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:26.420494    5439 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:26.420529    5439 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:57:b2:86:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2
	I0913 12:18:26.422181    5439 main.go:141] libmachine: STDOUT: 
	I0913 12:18:26.422198    5439 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:26.422219    5439 client.go:171] duration metric: took 325.193042ms to LocalClient.Create
	I0913 12:18:28.424363    5439 start.go:128] duration metric: took 2.352843792s to createHost
	I0913 12:18:28.424493    5439 start.go:83] releasing machines lock for "kindnet-151000", held for 2.353032416s
	W0913 12:18:28.424580    5439 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:28.435863    5439 out.go:177] * Deleting "kindnet-151000" in qemu2 ...
	W0913 12:18:28.468536    5439 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:28.468563    5439 start.go:729] Will try again in 5 seconds ...
	I0913 12:18:33.470694    5439 start.go:360] acquireMachinesLock for kindnet-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:33.471300    5439 start.go:364] duration metric: took 465.208µs to acquireMachinesLock for "kindnet-151000"
	I0913 12:18:33.471443    5439 start.go:93] Provisioning new machine with config: &{Name:kindnet-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:33.471729    5439 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:33.480169    5439 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:33.530580    5439 start.go:159] libmachine.API.Create for "kindnet-151000" (driver="qemu2")
	I0913 12:18:33.530627    5439 client.go:168] LocalClient.Create starting
	I0913 12:18:33.530764    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:33.530839    5439 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:33.530858    5439 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:33.530916    5439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:33.530960    5439 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:33.530973    5439 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:33.531532    5439 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:33.699056    5439 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:33.837455    5439 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:33.837463    5439 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:33.837693    5439 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2
	I0913 12:18:33.847249    5439 main.go:141] libmachine: STDOUT: 
	I0913 12:18:33.847272    5439 main.go:141] libmachine: STDERR: 
	I0913 12:18:33.847325    5439 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2 +20000M
	I0913 12:18:33.855195    5439 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:33.855212    5439 main.go:141] libmachine: STDERR: 
	I0913 12:18:33.855227    5439 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2
	I0913 12:18:33.855232    5439 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:33.855246    5439 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:33.855285    5439 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:2d:3a:65:d1:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kindnet-151000/disk.qcow2
	I0913 12:18:33.856932    5439 main.go:141] libmachine: STDOUT: 
	I0913 12:18:33.856948    5439 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:33.856963    5439 client.go:171] duration metric: took 326.342959ms to LocalClient.Create
	I0913 12:18:35.859106    5439 start.go:128] duration metric: took 2.387429792s to createHost
	I0913 12:18:35.859205    5439 start.go:83] releasing machines lock for "kindnet-151000", held for 2.387973917s
	W0913 12:18:35.859569    5439 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:35.875233    5439 out.go:201] 
	W0913 12:18:35.879508    5439 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:18:35.879535    5439 out.go:270] * 
	* 
	W0913 12:18:35.882294    5439 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:18:35.896428    5439 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.96s)
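The kindnet variant dies at the same point as flannel: two createHost attempts, each refused at /var/run/socket_vmnet before any CNI-specific code runs. Since every TestNetworkPlugins/*/Start failure in this report shares that root cause, a fail-fast guard would make these runs skip in milliseconds instead of burning ~10s per test. A hypothetical helper sketch (not part of net_test.go; it assumes the default socket path used by the qemu2 driver) might look like:

	package integration

	import (
		"net"
		"testing"
	)

	// skipIfNoSocketVMnet skips a --driver=qemu2 test up front when the
	// socket_vmnet daemon is not accepting connections, instead of letting
	// the test fail roughly ten seconds into VM creation as seen above.
	func skipIfNoSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			t.Skipf("socket_vmnet not reachable (%v); skipping qemu2 start test", err)
		}
		conn.Close()
	}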

TestNetworkPlugins/group/enable-default-cni/Start (9.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.817727167s)

-- stdout --
	* [enable-default-cni-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-151000" primary control-plane node in "enable-default-cni-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:18:38.236800    5553 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:18:38.236933    5553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:38.236936    5553 out.go:358] Setting ErrFile to fd 2...
	I0913 12:18:38.236938    5553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:38.237075    5553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:18:38.238146    5553 out.go:352] Setting JSON to false
	I0913 12:18:38.254656    5553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4681,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:18:38.254726    5553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:18:38.261374    5553 out.go:177] * [enable-default-cni-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:18:38.269321    5553 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:18:38.269408    5553 notify.go:220] Checking for updates...
	I0913 12:18:38.276253    5553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:18:38.279179    5553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:18:38.282277    5553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:18:38.285335    5553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:18:38.288351    5553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:18:38.291538    5553 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:18:38.291608    5553 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:18:38.291659    5553 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:18:38.296272    5553 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:18:38.303251    5553 start.go:297] selected driver: qemu2
	I0913 12:18:38.303256    5553 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:18:38.303262    5553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:18:38.305506    5553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:18:38.309238    5553 out.go:177] * Automatically selected the socket_vmnet network
	E0913 12:18:38.312347    5553 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0913 12:18:38.312359    5553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:18:38.312377    5553 cni.go:84] Creating CNI manager for "bridge"
	I0913 12:18:38.312388    5553 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:18:38.312426    5553 start.go:340] cluster config:
	{Name:enable-default-cni-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:18:38.315972    5553 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:18:38.323237    5553 out.go:177] * Starting "enable-default-cni-151000" primary control-plane node in "enable-default-cni-151000" cluster
	I0913 12:18:38.327220    5553 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:18:38.327237    5553 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:18:38.327254    5553 cache.go:56] Caching tarball of preloaded images
	I0913 12:18:38.327320    5553 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:18:38.327325    5553 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:18:38.327410    5553 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/enable-default-cni-151000/config.json ...
	I0913 12:18:38.327422    5553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/enable-default-cni-151000/config.json: {Name:mk54022e95af8934338b42aaca42acf288e29fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:18:38.327627    5553 start.go:360] acquireMachinesLock for enable-default-cni-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:38.327661    5553 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "enable-default-cni-151000"
	I0913 12:18:38.327671    5553 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:38.327696    5553 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:38.335218    5553 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:38.351906    5553 start.go:159] libmachine.API.Create for "enable-default-cni-151000" (driver="qemu2")
	I0913 12:18:38.351933    5553 client.go:168] LocalClient.Create starting
	I0913 12:18:38.351989    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:38.352020    5553 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:38.352030    5553 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:38.352067    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:38.352090    5553 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:38.352104    5553 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:38.352437    5553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:38.511405    5553 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:38.628241    5553 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:38.628248    5553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:38.628470    5553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2
	I0913 12:18:38.637906    5553 main.go:141] libmachine: STDOUT: 
	I0913 12:18:38.637933    5553 main.go:141] libmachine: STDERR: 
	I0913 12:18:38.637988    5553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2 +20000M
	I0913 12:18:38.645760    5553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:38.645773    5553 main.go:141] libmachine: STDERR: 
	I0913 12:18:38.645786    5553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2
	I0913 12:18:38.645795    5553 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:38.645807    5553 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:38.645840    5553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:14:1d:4a:73:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2
	I0913 12:18:38.647473    5553 main.go:141] libmachine: STDOUT: 
	I0913 12:18:38.647487    5553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:38.647515    5553 client.go:171] duration metric: took 295.589ms to LocalClient.Create
	I0913 12:18:40.649663    5553 start.go:128] duration metric: took 2.322027375s to createHost
	I0913 12:18:40.649743    5553 start.go:83] releasing machines lock for "enable-default-cni-151000", held for 2.322164083s
	W0913 12:18:40.649796    5553 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:40.666077    5553 out.go:177] * Deleting "enable-default-cni-151000" in qemu2 ...
	W0913 12:18:40.696370    5553 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:40.696419    5553 start.go:729] Will try again in 5 seconds ...
	I0913 12:18:45.699998    5553 start.go:360] acquireMachinesLock for enable-default-cni-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:45.700504    5553 start.go:364] duration metric: took 428.167µs to acquireMachinesLock for "enable-default-cni-151000"
	I0913 12:18:45.700638    5553 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:45.700847    5553 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:45.706534    5553 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:45.755211    5553 start.go:159] libmachine.API.Create for "enable-default-cni-151000" (driver="qemu2")
	I0913 12:18:45.755280    5553 client.go:168] LocalClient.Create starting
	I0913 12:18:45.755397    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:45.755464    5553 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:45.755477    5553 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:45.755534    5553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:45.755577    5553 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:45.755599    5553 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:45.756229    5553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:45.922705    5553 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:45.962121    5553 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:45.962132    5553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:45.962345    5553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2
	I0913 12:18:45.971552    5553 main.go:141] libmachine: STDOUT: 
	I0913 12:18:45.971577    5553 main.go:141] libmachine: STDERR: 
	I0913 12:18:45.971638    5553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2 +20000M
	I0913 12:18:45.979828    5553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:45.979845    5553 main.go:141] libmachine: STDERR: 
	I0913 12:18:45.979856    5553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2
	I0913 12:18:45.979863    5553 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:45.979875    5553 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:45.979903    5553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:aa:ed:ef:49:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/enable-default-cni-151000/disk.qcow2
	I0913 12:18:45.981604    5553 main.go:141] libmachine: STDOUT: 
	I0913 12:18:45.981622    5553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:45.981635    5553 client.go:171] duration metric: took 225.9385ms to LocalClient.Create
	I0913 12:18:47.987317    5553 start.go:128] duration metric: took 2.282519959s to createHost
	I0913 12:18:47.987403    5553 start.go:83] releasing machines lock for "enable-default-cni-151000", held for 2.282961959s
	W0913 12:18:47.987767    5553 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:47.998583    5553 out.go:201] 
	W0913 12:18:48.006672    5553 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:18:48.006702    5553 out.go:270] * 
	* 
	W0913 12:18:48.009203    5553 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:18:48.018641    5553 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.82s)
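
The whole group fails identically before any VM boots: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the "Connection refused" comes from the host's unix socket, not from QEMU or Kubernetes. A minimal standalone Go sketch that reproduces just that probe (the socket path is taken from the failing command line above; the file name and everything else here are illustrative, not part of the test suite):

// socket_vmnet_probe.go — diagnostic sketch, not minikube code.
// Dials the unix socket that socket_vmnet_client uses; a "connection
// refused" here matches the log above and means the socket_vmnet daemon
// (usually a root launchd job on these hosts) is not listening.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu invocation
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}

If this probe fails the same way on the CI host, restarting the socket_vmnet service should clear every failure in this group, since none of the runs get past machine creation.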

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
E0913 12:18:53.299416    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
E0913 12:18:56.999278    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.817975083s)

-- stdout --
	* [bridge-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-151000" primary control-plane node in "bridge-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:18:50.240220    5666 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:18:50.240358    5666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:50.240362    5666 out.go:358] Setting ErrFile to fd 2...
	I0913 12:18:50.240364    5666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:18:50.240488    5666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:18:50.241625    5666 out.go:352] Setting JSON to false
	I0913 12:18:50.257914    5666 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4693,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:18:50.257980    5666 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:18:50.264813    5666 out.go:177] * [bridge-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:18:50.272738    5666 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:18:50.272767    5666 notify.go:220] Checking for updates...
	I0913 12:18:50.279689    5666 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:18:50.282735    5666 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:18:50.285743    5666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:18:50.288635    5666 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:18:50.291704    5666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:18:50.295056    5666 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:18:50.295121    5666 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:18:50.295161    5666 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:18:50.298666    5666 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:18:50.305696    5666 start.go:297] selected driver: qemu2
	I0913 12:18:50.305701    5666 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:18:50.305706    5666 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:18:50.307925    5666 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:18:50.310686    5666 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:18:50.313834    5666 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:18:50.313855    5666 cni.go:84] Creating CNI manager for "bridge"
	I0913 12:18:50.313860    5666 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:18:50.313898    5666 start.go:340] cluster config:
	{Name:bridge-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:18:50.317707    5666 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:18:50.324738    5666 out.go:177] * Starting "bridge-151000" primary control-plane node in "bridge-151000" cluster
	I0913 12:18:50.328731    5666 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:18:50.328746    5666 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:18:50.328760    5666 cache.go:56] Caching tarball of preloaded images
	I0913 12:18:50.328827    5666 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:18:50.328833    5666 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:18:50.328887    5666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/bridge-151000/config.json ...
	I0913 12:18:50.328899    5666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/bridge-151000/config.json: {Name:mkd636495bc7ba5944ed3ebfb16809f0bb4e0b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:18:50.329109    5666 start.go:360] acquireMachinesLock for bridge-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:50.329142    5666 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "bridge-151000"
	I0913 12:18:50.329158    5666 start.go:93] Provisioning new machine with config: &{Name:bridge-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:50.329181    5666 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:50.337769    5666 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:50.353458    5666 start.go:159] libmachine.API.Create for "bridge-151000" (driver="qemu2")
	I0913 12:18:50.353479    5666 client.go:168] LocalClient.Create starting
	I0913 12:18:50.353538    5666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:50.353567    5666 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:50.353579    5666 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:50.353620    5666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:50.353645    5666 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:50.353651    5666 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:50.353980    5666 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:50.511434    5666 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:50.560595    5666 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:50.560601    5666 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:50.560815    5666 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2
	I0913 12:18:50.569933    5666 main.go:141] libmachine: STDOUT: 
	I0913 12:18:50.569953    5666 main.go:141] libmachine: STDERR: 
	I0913 12:18:50.570010    5666 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2 +20000M
	I0913 12:18:50.578131    5666 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:50.578144    5666 main.go:141] libmachine: STDERR: 
	I0913 12:18:50.578188    5666 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2
	I0913 12:18:50.578194    5666 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:50.578205    5666 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:50.578231    5666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:2f:22:81:ac:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2
	I0913 12:18:50.580013    5666 main.go:141] libmachine: STDOUT: 
	I0913 12:18:50.580026    5666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:50.580044    5666 client.go:171] duration metric: took 226.2545ms to LocalClient.Create
	I0913 12:18:52.584775    5666 start.go:128] duration metric: took 2.252709583s to createHost
	I0913 12:18:52.584901    5666 start.go:83] releasing machines lock for "bridge-151000", held for 2.252882292s
	W0913 12:18:52.584983    5666 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:52.595241    5666 out.go:177] * Deleting "bridge-151000" in qemu2 ...
	W0913 12:18:52.628620    5666 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:18:52.628656    5666 start.go:729] Will try again in 5 seconds ...
	I0913 12:18:57.635848    5666 start.go:360] acquireMachinesLock for bridge-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:18:57.636336    5666 start.go:364] duration metric: took 400.5µs to acquireMachinesLock for "bridge-151000"
	I0913 12:18:57.636403    5666 start.go:93] Provisioning new machine with config: &{Name:bridge-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:18:57.636648    5666 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:18:57.644428    5666 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:18:57.692099    5666 start.go:159] libmachine.API.Create for "bridge-151000" (driver="qemu2")
	I0913 12:18:57.692155    5666 client.go:168] LocalClient.Create starting
	I0913 12:18:57.692300    5666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:18:57.692364    5666 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:57.692382    5666 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:57.692452    5666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:18:57.692504    5666 main.go:141] libmachine: Decoding PEM data...
	I0913 12:18:57.692517    5666 main.go:141] libmachine: Parsing certificate...
	I0913 12:18:57.693025    5666 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:18:57.857817    5666 main.go:141] libmachine: Creating SSH key...
	I0913 12:18:57.974988    5666 main.go:141] libmachine: Creating Disk image...
	I0913 12:18:57.974995    5666 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:18:57.975208    5666 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2
	I0913 12:18:57.984708    5666 main.go:141] libmachine: STDOUT: 
	I0913 12:18:57.984725    5666 main.go:141] libmachine: STDERR: 
	I0913 12:18:57.984781    5666 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2 +20000M
	I0913 12:18:57.993187    5666 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:18:57.993205    5666 main.go:141] libmachine: STDERR: 
	I0913 12:18:57.993216    5666 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2
	I0913 12:18:57.993219    5666 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:18:57.993227    5666 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:18:57.993266    5666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d4:d0:80:a0:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/bridge-151000/disk.qcow2
	I0913 12:18:57.995026    5666 main.go:141] libmachine: STDOUT: 
	I0913 12:18:57.995040    5666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:18:57.995052    5666 client.go:171] duration metric: took 302.637542ms to LocalClient.Create
	I0913 12:18:59.998795    5666 start.go:128] duration metric: took 2.360274292s to createHost
	I0913 12:18:59.998896    5666 start.go:83] releasing machines lock for "bridge-151000", held for 2.360699375s
	W0913 12:18:59.999257    5666 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:00.009942    5666 out.go:201] 
	W0913 12:19:00.013219    5666 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:19:00.013257    5666 out.go:270] * 
	* 
	W0913 12:19:00.016639    5666 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:19:00.024760    5666 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
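
The bridge run shows the same two-attempt shape as the rest of the group: create fails, minikube deletes the half-created machine, waits five seconds, retries once, and then exits 80 with GUEST_PROVISION. A compressed Go sketch of that control flow (names like createHost and startHost are illustrative stand-ins, not minikube's actual start.go implementation):

// Retry sketch mirroring the "! StartHost failed, but will try again" /
// "Will try again in 5 seconds ..." sequence captured above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real provisioning call, which shells out to
// socket_vmnet_client + qemu-system-aarch64 and currently always fails.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startHost() error {
	err := createHost()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // the delay visible in the log timestamps above
	if err := createHost(); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to", err) // surfaces as exit status 80 in the real CLI
	}
}

Both attempts hit the same refused socket, so the retry only adds the roughly five-second gap between the two "Creating qemu2 VM" lines in stdout.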

TestNetworkPlugins/group/kubenet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.794047042s)

-- stdout --
	* [kubenet-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-151000" primary control-plane node in "kubenet-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:19:02.266659    5776 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:19:02.266791    5776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:02.266795    5776 out.go:358] Setting ErrFile to fd 2...
	I0913 12:19:02.266797    5776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:02.266924    5776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:19:02.268188    5776 out.go:352] Setting JSON to false
	I0913 12:19:02.285807    5776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4705,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:19:02.285910    5776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:19:02.291360    5776 out.go:177] * [kubenet-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:19:02.299170    5776 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:19:02.299209    5776 notify.go:220] Checking for updates...
	I0913 12:19:02.305221    5776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:19:02.308183    5776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:19:02.311222    5776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:19:02.314267    5776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:19:02.317229    5776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:19:02.320518    5776 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:19:02.320581    5776 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:19:02.320624    5776 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:19:02.324141    5776 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:19:02.331194    5776 start.go:297] selected driver: qemu2
	I0913 12:19:02.331200    5776 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:19:02.331208    5776 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:19:02.333599    5776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:19:02.336277    5776 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:19:02.339301    5776 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:19:02.339319    5776 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0913 12:19:02.339347    5776 start.go:340] cluster config:
	{Name:kubenet-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:19:02.342924    5776 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:19:02.350265    5776 out.go:177] * Starting "kubenet-151000" primary control-plane node in "kubenet-151000" cluster
	I0913 12:19:02.354203    5776 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:19:02.354216    5776 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:19:02.354229    5776 cache.go:56] Caching tarball of preloaded images
	I0913 12:19:02.354283    5776 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:19:02.354289    5776 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:19:02.354340    5776 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kubenet-151000/config.json ...
	I0913 12:19:02.354351    5776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/kubenet-151000/config.json: {Name:mkc78becb299d4d9ad15b6693afbc15925037935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:19:02.354556    5776 start.go:360] acquireMachinesLock for kubenet-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:02.354588    5776 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "kubenet-151000"
	I0913 12:19:02.354598    5776 start.go:93] Provisioning new machine with config: &{Name:kubenet-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:02.354629    5776 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:02.362217    5776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:02.378600    5776 start.go:159] libmachine.API.Create for "kubenet-151000" (driver="qemu2")
	I0913 12:19:02.378631    5776 client.go:168] LocalClient.Create starting
	I0913 12:19:02.378693    5776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:02.378721    5776 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:02.378731    5776 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:02.378766    5776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:02.378789    5776 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:02.378800    5776 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:02.379224    5776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:02.536921    5776 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:02.582817    5776 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:02.582828    5776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:02.583087    5776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2
	I0913 12:19:02.593072    5776 main.go:141] libmachine: STDOUT: 
	I0913 12:19:02.593090    5776 main.go:141] libmachine: STDERR: 
	I0913 12:19:02.593138    5776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2 +20000M
	I0913 12:19:02.601535    5776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:02.601554    5776 main.go:141] libmachine: STDERR: 
	I0913 12:19:02.601569    5776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2
	I0913 12:19:02.601574    5776 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:02.601588    5776 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:02.601614    5776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:88:c1:ec:a6:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2
	I0913 12:19:02.603428    5776 main.go:141] libmachine: STDOUT: 
	I0913 12:19:02.603445    5776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:02.603467    5776 client.go:171] duration metric: took 224.692333ms to LocalClient.Create
	I0913 12:19:04.606797    5776 start.go:128] duration metric: took 2.250862959s to createHost
	I0913 12:19:04.606885    5776 start.go:83] releasing machines lock for "kubenet-151000", held for 2.251010125s
	W0913 12:19:04.606935    5776 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:04.613451    5776 out.go:177] * Deleting "kubenet-151000" in qemu2 ...
	W0913 12:19:04.642715    5776 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:04.642739    5776 start.go:729] Will try again in 5 seconds ...
	I0913 12:19:09.647220    5776 start.go:360] acquireMachinesLock for kubenet-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:09.647826    5776 start.go:364] duration metric: took 474.292µs to acquireMachinesLock for "kubenet-151000"
	I0913 12:19:09.647982    5776 start.go:93] Provisioning new machine with config: &{Name:kubenet-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:09.648264    5776 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:09.653939    5776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:09.706294    5776 start.go:159] libmachine.API.Create for "kubenet-151000" (driver="qemu2")
	I0913 12:19:09.706357    5776 client.go:168] LocalClient.Create starting
	I0913 12:19:09.706481    5776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:09.706554    5776 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:09.706568    5776 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:09.706638    5776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:09.706685    5776 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:09.706697    5776 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:09.707250    5776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:09.875840    5776 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:09.970050    5776 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:09.970057    5776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:09.970256    5776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2
	I0913 12:19:09.979441    5776 main.go:141] libmachine: STDOUT: 
	I0913 12:19:09.979463    5776 main.go:141] libmachine: STDERR: 
	I0913 12:19:09.979519    5776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2 +20000M
	I0913 12:19:09.987934    5776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:09.987950    5776 main.go:141] libmachine: STDERR: 
	I0913 12:19:09.987961    5776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2
	I0913 12:19:09.987966    5776 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:09.987975    5776 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:09.988006    5776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:bf:ba:f4:39:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/kubenet-151000/disk.qcow2
	I0913 12:19:09.989685    5776 main.go:141] libmachine: STDOUT: 
	I0913 12:19:09.989699    5776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:09.989713    5776 client.go:171] duration metric: took 283.246541ms to LocalClient.Create
	I0913 12:19:11.992625    5776 start.go:128] duration metric: took 2.343463792s to createHost
	I0913 12:19:11.992696    5776 start.go:83] releasing machines lock for "kubenet-151000", held for 2.3440375s
	W0913 12:19:11.993087    5776 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:12.006903    5776 out.go:201] 
	W0913 12:19:12.011075    5776 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:19:12.011107    5776 out.go:270] * 
	* 
	W0913 12:19:12.013623    5776 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:19:12.022947    5776 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.80s)
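Every attempt in this test dies at the same step: the socket_vmnet client that minikube uses to give the QEMU VM networking cannot reach the daemon's unix socket, so the VM never boots. A minimal pre-flight check before re-running, assuming the macOS paths shown in the log above and the stock BSD nc/lsof tools (this sketch is a diagnostic aid, not part of the test itself):

	# The unix socket the failing qemu launch points at should exist...
	ls -l /var/run/socket_vmnet
	# ...and a daemon should accept connections on it; with no listener this
	# reproduces the log's "Connection refused" immediately.
	nc -U /var/run/socket_vmnet < /dev/null
	# A socket_vmnet process should hold the socket open.
	sudo lsof -U | grep socket_vmnet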

TestNetworkPlugins/group/custom-flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.89138675s)

-- stdout --
	* [custom-flannel-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-151000" primary control-plane node in "custom-flannel-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:19:14.243822    5885 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:19:14.243945    5885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:14.243949    5885 out.go:358] Setting ErrFile to fd 2...
	I0913 12:19:14.243951    5885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:14.244080    5885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:19:14.245164    5885 out.go:352] Setting JSON to false
	I0913 12:19:14.261327    5885 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4717,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:19:14.261387    5885 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:19:14.267839    5885 out.go:177] * [custom-flannel-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:19:14.275705    5885 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:19:14.275810    5885 notify.go:220] Checking for updates...
	I0913 12:19:14.282692    5885 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:19:14.285693    5885 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:19:14.288703    5885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:19:14.291705    5885 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:19:14.294680    5885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:19:14.298035    5885 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:19:14.298096    5885 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:19:14.298146    5885 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:19:14.302662    5885 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:19:14.309754    5885 start.go:297] selected driver: qemu2
	I0913 12:19:14.309762    5885 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:19:14.309768    5885 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:19:14.311889    5885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:19:14.314706    5885 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:19:14.317865    5885 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:19:14.317882    5885 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0913 12:19:14.317892    5885 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0913 12:19:14.317925    5885 start.go:340] cluster config:
	{Name:custom-flannel-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:19:14.321277    5885 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:19:14.328706    5885 out.go:177] * Starting "custom-flannel-151000" primary control-plane node in "custom-flannel-151000" cluster
	I0913 12:19:14.332707    5885 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:19:14.332719    5885 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:19:14.332729    5885 cache.go:56] Caching tarball of preloaded images
	I0913 12:19:14.332776    5885 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:19:14.332781    5885 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:19:14.332828    5885 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/custom-flannel-151000/config.json ...
	I0913 12:19:14.332845    5885 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/custom-flannel-151000/config.json: {Name:mkbe778ad7fcc64f3072bb0657a456c28e6857ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:19:14.333047    5885 start.go:360] acquireMachinesLock for custom-flannel-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:14.333080    5885 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "custom-flannel-151000"
	I0913 12:19:14.333091    5885 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:14.333114    5885 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:14.341722    5885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:14.357239    5885 start.go:159] libmachine.API.Create for "custom-flannel-151000" (driver="qemu2")
	I0913 12:19:14.357269    5885 client.go:168] LocalClient.Create starting
	I0913 12:19:14.357356    5885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:14.357387    5885 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:14.357402    5885 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:14.357454    5885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:14.357477    5885 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:14.357484    5885 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:14.357848    5885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:14.514565    5885 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:14.659866    5885 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:14.659873    5885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:14.660095    5885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2
	I0913 12:19:14.670550    5885 main.go:141] libmachine: STDOUT: 
	I0913 12:19:14.670583    5885 main.go:141] libmachine: STDERR: 
	I0913 12:19:14.670672    5885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2 +20000M
	I0913 12:19:14.679478    5885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:14.679493    5885 main.go:141] libmachine: STDERR: 
	I0913 12:19:14.679507    5885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2
	I0913 12:19:14.679511    5885 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:14.679529    5885 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:14.679555    5885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:5f:db:bd:fd:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2
	I0913 12:19:14.681303    5885 main.go:141] libmachine: STDOUT: 
	I0913 12:19:14.681314    5885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:14.681342    5885 client.go:171] duration metric: took 323.981458ms to LocalClient.Create
	I0913 12:19:16.684015    5885 start.go:128] duration metric: took 2.350300625s to createHost
	I0913 12:19:16.684096    5885 start.go:83] releasing machines lock for "custom-flannel-151000", held for 2.350429458s
	W0913 12:19:16.684132    5885 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:16.702044    5885 out.go:177] * Deleting "custom-flannel-151000" in qemu2 ...
	W0913 12:19:16.727592    5885 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:16.727617    5885 start.go:729] Will try again in 5 seconds ...
	I0913 12:19:21.730683    5885 start.go:360] acquireMachinesLock for custom-flannel-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:21.730993    5885 start.go:364] duration metric: took 241.125µs to acquireMachinesLock for "custom-flannel-151000"
	I0913 12:19:21.731105    5885 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:21.731212    5885 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:21.740926    5885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:21.774117    5885 start.go:159] libmachine.API.Create for "custom-flannel-151000" (driver="qemu2")
	I0913 12:19:21.774160    5885 client.go:168] LocalClient.Create starting
	I0913 12:19:21.774262    5885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:21.774307    5885 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:21.774318    5885 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:21.774366    5885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:21.774399    5885 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:21.774407    5885 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:21.774848    5885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:21.938506    5885 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:22.041198    5885 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:22.041206    5885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:22.041484    5885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2
	I0913 12:19:22.050653    5885 main.go:141] libmachine: STDOUT: 
	I0913 12:19:22.050679    5885 main.go:141] libmachine: STDERR: 
	I0913 12:19:22.050736    5885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2 +20000M
	I0913 12:19:22.058610    5885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:22.058635    5885 main.go:141] libmachine: STDERR: 
	I0913 12:19:22.058651    5885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2
	I0913 12:19:22.058655    5885 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:22.058663    5885 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:22.058691    5885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d1:bd:96:b3:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/custom-flannel-151000/disk.qcow2
	I0913 12:19:22.060312    5885 main.go:141] libmachine: STDOUT: 
	I0913 12:19:22.060334    5885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:22.060347    5885 client.go:171] duration metric: took 286.139416ms to LocalClient.Create
	I0913 12:19:24.062829    5885 start.go:128] duration metric: took 2.331251792s to createHost
	I0913 12:19:24.062934    5885 start.go:83] releasing machines lock for "custom-flannel-151000", held for 2.331582875s
	W0913 12:19:24.063356    5885 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:24.078059    5885 out.go:201] 
	W0913 12:19:24.081999    5885 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:19:24.082026    5885 out.go:270] * 
	* 
	W0913 12:19:24.084896    5885 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:19:24.095010    5885 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.89s)
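The same socket_vmnet refusal, not anything flannel-specific, kills this run: both attempts fail while creating the VM, before the custom CNI manifest is ever applied. Once the daemon is reachable, a hand-repro sketch using the exact invocation from net_test.go above (binary path and flags copied verbatim from the log; the delete is the cleanup the log itself suggests):

	# Re-run the failing start by hand to confirm the network fix...
	out/minikube-darwin-arm64 start -p custom-flannel-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2
	# ...then remove the profile so the suite starts clean.
	out/minikube-darwin-arm64 delete -p custom-flannel-151000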

TestNetworkPlugins/group/calico/Start (9.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.776404459s)

-- stdout --
	* [calico-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-151000" primary control-plane node in "calico-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:19:26.524250    6004 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:19:26.524396    6004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:26.524400    6004 out.go:358] Setting ErrFile to fd 2...
	I0913 12:19:26.524402    6004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:26.524547    6004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:19:26.525622    6004 out.go:352] Setting JSON to false
	I0913 12:19:26.541659    6004 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4729,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:19:26.541733    6004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:19:26.547118    6004 out.go:177] * [calico-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:19:26.554097    6004 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:19:26.554141    6004 notify.go:220] Checking for updates...
	I0913 12:19:26.561993    6004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:19:26.565083    6004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:19:26.567952    6004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:19:26.570991    6004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:19:26.574055    6004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:19:26.577405    6004 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:19:26.577472    6004 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:19:26.577523    6004 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:19:26.584955    6004 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:19:26.591947    6004 start.go:297] selected driver: qemu2
	I0913 12:19:26.591955    6004 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:19:26.591964    6004 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:19:26.594267    6004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:19:26.596973    6004 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:19:26.600076    6004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:19:26.600092    6004 cni.go:84] Creating CNI manager for "calico"
	I0913 12:19:26.600105    6004 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0913 12:19:26.600135    6004 start.go:340] cluster config:
	{Name:calico-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:19:26.603852    6004 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:19:26.610973    6004 out.go:177] * Starting "calico-151000" primary control-plane node in "calico-151000" cluster
	I0913 12:19:26.614799    6004 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:19:26.614812    6004 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:19:26.614824    6004 cache.go:56] Caching tarball of preloaded images
	I0913 12:19:26.614910    6004 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:19:26.614919    6004 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:19:26.614968    6004 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/calico-151000/config.json ...
	I0913 12:19:26.614979    6004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/calico-151000/config.json: {Name:mk250283135bd89af106154e9df0ec9497c5880f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:19:26.615203    6004 start.go:360] acquireMachinesLock for calico-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:26.615241    6004 start.go:364] duration metric: took 32.459µs to acquireMachinesLock for "calico-151000"
	I0913 12:19:26.615251    6004 start.go:93] Provisioning new machine with config: &{Name:calico-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:26.615297    6004 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:26.623851    6004 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:26.641017    6004 start.go:159] libmachine.API.Create for "calico-151000" (driver="qemu2")
	I0913 12:19:26.641043    6004 client.go:168] LocalClient.Create starting
	I0913 12:19:26.641124    6004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:26.641158    6004 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:26.641168    6004 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:26.641208    6004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:26.641231    6004 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:26.641239    6004 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:26.641585    6004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:26.798724    6004 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:26.857755    6004 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:26.857761    6004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:26.857975    6004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2
	I0913 12:19:26.867721    6004 main.go:141] libmachine: STDOUT: 
	I0913 12:19:26.867740    6004 main.go:141] libmachine: STDERR: 
	I0913 12:19:26.867789    6004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2 +20000M
	I0913 12:19:26.875698    6004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:26.875713    6004 main.go:141] libmachine: STDERR: 
	I0913 12:19:26.875733    6004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2
	I0913 12:19:26.875738    6004 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:26.875750    6004 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:26.875779    6004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:f8:e9:9b:1a:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2
	I0913 12:19:26.877380    6004 main.go:141] libmachine: STDOUT: 
	I0913 12:19:26.877394    6004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:26.877415    6004 client.go:171] duration metric: took 236.341125ms to LocalClient.Create
	I0913 12:19:28.879922    6004 start.go:128] duration metric: took 2.264367625s to createHost
	I0913 12:19:28.880024    6004 start.go:83] releasing machines lock for "calico-151000", held for 2.264555416s
	W0913 12:19:28.880082    6004 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:28.887556    6004 out.go:177] * Deleting "calico-151000" in qemu2 ...
	W0913 12:19:28.925608    6004 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:28.925640    6004 start.go:729] Will try again in 5 seconds ...
	I0913 12:19:33.926953    6004 start.go:360] acquireMachinesLock for calico-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:33.927619    6004 start.go:364] duration metric: took 560.917µs to acquireMachinesLock for "calico-151000"
	I0913 12:19:33.927775    6004 start.go:93] Provisioning new machine with config: &{Name:calico-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:33.928097    6004 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:33.934816    6004 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:33.987594    6004 start.go:159] libmachine.API.Create for "calico-151000" (driver="qemu2")
	I0913 12:19:33.987656    6004 client.go:168] LocalClient.Create starting
	I0913 12:19:33.987787    6004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:33.987862    6004 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:33.987885    6004 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:33.987953    6004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:33.988001    6004 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:33.988017    6004 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:33.988574    6004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:34.155428    6004 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:34.202336    6004 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:34.202345    6004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:34.202589    6004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2
	I0913 12:19:34.212293    6004 main.go:141] libmachine: STDOUT: 
	I0913 12:19:34.212323    6004 main.go:141] libmachine: STDERR: 
	I0913 12:19:34.212380    6004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2 +20000M
	I0913 12:19:34.220922    6004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:34.220940    6004 main.go:141] libmachine: STDERR: 
	I0913 12:19:34.220953    6004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2
	I0913 12:19:34.220958    6004 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:34.220969    6004 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:34.220998    6004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:17:7b:3b:c8:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/calico-151000/disk.qcow2
	I0913 12:19:34.222933    6004 main.go:141] libmachine: STDOUT: 
	I0913 12:19:34.222949    6004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:34.222961    6004 client.go:171] duration metric: took 235.287667ms to LocalClient.Create
	I0913 12:19:36.225167    6004 start.go:128] duration metric: took 2.29693475s to createHost
	I0913 12:19:36.225241    6004 start.go:83] releasing machines lock for "calico-151000", held for 2.297452375s
	W0913 12:19:36.225411    6004 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:36.240880    6004 out.go:201] 
	W0913 12:19:36.244701    6004 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:19:36.244714    6004 out.go:270] * 
	* 
	W0913 12:19:36.245670    6004 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:19:36.262800    6004 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.78s)
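Every qemu2 start in this run fails at the same point: socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so QEMU is never handed a network descriptor and libmachine gives up with GUEST_PROVISION. A quick triage sketch for the build host, assuming socket_vmnet was installed to the paths shown in the log and loaded under the launchd label suggested by the lima-vm/socket_vmnet README (both are assumptions, not taken from this report):

    # Does the socket exist, and is the daemon loaded? (label is an assumption)
    ls -l /var/run/socket_vmnet
    sudo launchctl print system/io.github.lima-vm.socket_vmnet
    # If it is down, run the daemon in the foreground to watch for errors;
    # the gateway address is the README's example, not taken from this log:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet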

TestNetworkPlugins/group/false/Start (9.89s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-151000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.883656875s)

-- stdout --
	* [false-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-151000" primary control-plane node in "false-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:19:38.641139    6121 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:19:38.641271    6121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:38.641274    6121 out.go:358] Setting ErrFile to fd 2...
	I0913 12:19:38.641277    6121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:38.641402    6121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:19:38.642447    6121 out.go:352] Setting JSON to false
	I0913 12:19:38.659050    6121 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4741,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:19:38.659132    6121 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:19:38.664587    6121 out.go:177] * [false-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:19:38.673399    6121 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:19:38.673465    6121 notify.go:220] Checking for updates...
	I0913 12:19:38.680432    6121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:19:38.683440    6121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:19:38.686343    6121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:19:38.689396    6121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:19:38.692399    6121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:19:38.695661    6121 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:19:38.695729    6121 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:19:38.695781    6121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:19:38.699374    6121 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:19:38.706392    6121 start.go:297] selected driver: qemu2
	I0913 12:19:38.706397    6121 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:19:38.706403    6121 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:19:38.708660    6121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:19:38.712407    6121 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:19:38.715439    6121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:19:38.715455    6121 cni.go:84] Creating CNI manager for "false"
	I0913 12:19:38.715489    6121 start.go:340] cluster config:
	{Name:false-151000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:19:38.718940    6121 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:19:38.726388    6121 out.go:177] * Starting "false-151000" primary control-plane node in "false-151000" cluster
	I0913 12:19:38.730388    6121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:19:38.730404    6121 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:19:38.730417    6121 cache.go:56] Caching tarball of preloaded images
	I0913 12:19:38.730502    6121 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:19:38.730507    6121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:19:38.730556    6121 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/false-151000/config.json ...
	I0913 12:19:38.730567    6121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/false-151000/config.json: {Name:mk480c410752074839c526f4fca5d9e41a6c3ecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:19:38.730773    6121 start.go:360] acquireMachinesLock for false-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:38.730803    6121 start.go:364] duration metric: took 24.208µs to acquireMachinesLock for "false-151000"
	I0913 12:19:38.730812    6121 start.go:93] Provisioning new machine with config: &{Name:false-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:38.730841    6121 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:38.739429    6121 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:38.754736    6121 start.go:159] libmachine.API.Create for "false-151000" (driver="qemu2")
	I0913 12:19:38.754761    6121 client.go:168] LocalClient.Create starting
	I0913 12:19:38.754847    6121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:38.754879    6121 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:38.754888    6121 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:38.754924    6121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:38.754951    6121 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:38.754959    6121 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:38.755304    6121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:38.913247    6121 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:39.000284    6121 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:39.000293    6121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:39.000514    6121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2
	I0913 12:19:39.010012    6121 main.go:141] libmachine: STDOUT: 
	I0913 12:19:39.010041    6121 main.go:141] libmachine: STDERR: 
	I0913 12:19:39.010099    6121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2 +20000M
	I0913 12:19:39.017960    6121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:39.017977    6121 main.go:141] libmachine: STDERR: 
	I0913 12:19:39.017992    6121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2
	I0913 12:19:39.017995    6121 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:39.018008    6121 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:39.018035    6121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5c:85:ac:a2:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2
	I0913 12:19:39.019666    6121 main.go:141] libmachine: STDOUT: 
	I0913 12:19:39.019680    6121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:39.019708    6121 client.go:171] duration metric: took 264.933083ms to LocalClient.Create
	I0913 12:19:41.021973    6121 start.go:128] duration metric: took 2.291037s to createHost
	I0913 12:19:41.022067    6121 start.go:83] releasing machines lock for "false-151000", held for 2.291189667s
	W0913 12:19:41.022147    6121 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:41.037285    6121 out.go:177] * Deleting "false-151000" in qemu2 ...
	W0913 12:19:41.070866    6121 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:41.070905    6121 start.go:729] Will try again in 5 seconds ...
	I0913 12:19:46.073351    6121 start.go:360] acquireMachinesLock for false-151000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:46.073877    6121 start.go:364] duration metric: took 416.792µs to acquireMachinesLock for "false-151000"
	I0913 12:19:46.073956    6121 start.go:93] Provisioning new machine with config: &{Name:false-151000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-151000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:46.074315    6121 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:46.083016    6121 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 12:19:46.133039    6121 start.go:159] libmachine.API.Create for "false-151000" (driver="qemu2")
	I0913 12:19:46.133089    6121 client.go:168] LocalClient.Create starting
	I0913 12:19:46.133212    6121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:46.133301    6121 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:46.133317    6121 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:46.133395    6121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:46.133441    6121 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:46.133455    6121 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:46.133979    6121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:46.300249    6121 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:46.436390    6121 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:46.436396    6121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:46.436612    6121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2
	I0913 12:19:46.446270    6121 main.go:141] libmachine: STDOUT: 
	I0913 12:19:46.446299    6121 main.go:141] libmachine: STDERR: 
	I0913 12:19:46.446369    6121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2 +20000M
	I0913 12:19:46.454556    6121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:46.454571    6121 main.go:141] libmachine: STDERR: 
	I0913 12:19:46.454584    6121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2
	I0913 12:19:46.454588    6121 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:46.454595    6121 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:46.454623    6121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:26:0f:c4:fa:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/false-151000/disk.qcow2
	I0913 12:19:46.456237    6121 main.go:141] libmachine: STDOUT: 
	I0913 12:19:46.456252    6121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:46.456264    6121 client.go:171] duration metric: took 323.165416ms to LocalClient.Create
	I0913 12:19:48.458462    6121 start.go:128] duration metric: took 2.384063125s to createHost
	I0913 12:19:48.458566    6121 start.go:83] releasing machines lock for "false-151000", held for 2.3846495s
	W0913 12:19:48.458824    6121 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:48.470368    6121 out.go:201] 
	W0913 12:19:48.473313    6121 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:19:48.473332    6121 out.go:270] * 
	* 
	W0913 12:19:48.474815    6121 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:19:48.484308    6121 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
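Disk provisioning itself succeeds on every attempt above; the failure comes only at the network hookup that follows. libmachine builds the guest disk in two qemu-img steps, visible at 12:19:46: convert the raw seed image to qcow2, then grow it to the requested Disk=20000MB. A standalone sketch with placeholder file names (the log does not show how minikube seeds the raw image, so an empty raw file stands in for it):

    # Placeholder seed; minikube actually writes a raw image carrying the SSH key
    qemu-img create -f raw seed.raw 1M
    qemu-img convert -f raw -O qcow2 seed.raw disk.qcow2
    # Grow the qcow2 by 20000 MB, matching the "+20000M" resize in the log
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2   # virtual size should now report roughly 20 GB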

TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-556000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-556000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.875088208s)

-- stdout --
	* [old-k8s-version-556000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-556000" primary control-plane node in "old-k8s-version-556000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-556000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:19:50.911318    6237 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:19:50.911441    6237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:50.911444    6237 out.go:358] Setting ErrFile to fd 2...
	I0913 12:19:50.911447    6237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:19:50.911559    6237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:19:50.912557    6237 out.go:352] Setting JSON to false
	I0913 12:19:50.929638    6237 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4753,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:19:50.929718    6237 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:19:50.935828    6237 out.go:177] * [old-k8s-version-556000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:19:50.944560    6237 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:19:50.944604    6237 notify.go:220] Checking for updates...
	I0913 12:19:50.960492    6237 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:19:50.963593    6237 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:19:50.966574    6237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:19:50.969500    6237 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:19:50.972620    6237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:19:50.975908    6237 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:19:50.975977    6237 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:19:50.976053    6237 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:19:50.979565    6237 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:19:50.986647    6237 start.go:297] selected driver: qemu2
	I0913 12:19:50.986654    6237 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:19:50.986662    6237 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:19:50.989023    6237 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:19:50.991607    6237 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:19:50.994616    6237 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:19:50.994632    6237 cni.go:84] Creating CNI manager for ""
	I0913 12:19:50.994651    6237 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 12:19:50.994672    6237 start.go:340] cluster config:
	{Name:old-k8s-version-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:19:50.998322    6237 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:19:51.005502    6237 out.go:177] * Starting "old-k8s-version-556000" primary control-plane node in "old-k8s-version-556000" cluster
	I0913 12:19:51.009614    6237 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 12:19:51.009626    6237 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 12:19:51.009638    6237 cache.go:56] Caching tarball of preloaded images
	I0913 12:19:51.009693    6237 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:19:51.009698    6237 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 12:19:51.009763    6237 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/old-k8s-version-556000/config.json ...
	I0913 12:19:51.009773    6237 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/old-k8s-version-556000/config.json: {Name:mk71ee35b1b6a951bd897fdaad90db54ac2e1eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:19:51.009975    6237 start.go:360] acquireMachinesLock for old-k8s-version-556000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:51.010009    6237 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "old-k8s-version-556000"
	I0913 12:19:51.010019    6237 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:51.010049    6237 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:51.017531    6237 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:19:51.033569    6237 start.go:159] libmachine.API.Create for "old-k8s-version-556000" (driver="qemu2")
	I0913 12:19:51.033601    6237 client.go:168] LocalClient.Create starting
	I0913 12:19:51.033665    6237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:51.033697    6237 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:51.033706    6237 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:51.033742    6237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:51.033765    6237 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:51.033773    6237 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:51.034121    6237 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:51.191164    6237 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:51.240705    6237 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:51.240712    6237 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:51.240914    6237 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:19:51.249934    6237 main.go:141] libmachine: STDOUT: 
	I0913 12:19:51.249948    6237 main.go:141] libmachine: STDERR: 
	I0913 12:19:51.250009    6237 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2 +20000M
	I0913 12:19:51.257754    6237 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:51.257771    6237 main.go:141] libmachine: STDERR: 
	I0913 12:19:51.257784    6237 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:19:51.257789    6237 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:51.257798    6237 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:51.257827    6237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:e1:5f:2d:ce:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:19:51.259443    6237 main.go:141] libmachine: STDOUT: 
	I0913 12:19:51.259458    6237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:51.259478    6237 client.go:171] duration metric: took 225.870334ms to LocalClient.Create
	I0913 12:19:53.261665    6237 start.go:128] duration metric: took 2.251586875s to createHost
	I0913 12:19:53.261765    6237 start.go:83] releasing machines lock for "old-k8s-version-556000", held for 2.251755083s
	W0913 12:19:53.261814    6237 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:53.267632    6237 out.go:177] * Deleting "old-k8s-version-556000" in qemu2 ...
	W0913 12:19:53.300680    6237 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:19:53.300711    6237 start.go:729] Will try again in 5 seconds ...
	I0913 12:19:58.302806    6237 start.go:360] acquireMachinesLock for old-k8s-version-556000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:19:58.303051    6237 start.go:364] duration metric: took 201.083µs to acquireMachinesLock for "old-k8s-version-556000"
	I0913 12:19:58.303115    6237 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:19:58.303229    6237 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:19:58.312527    6237 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:19:58.343287    6237 start.go:159] libmachine.API.Create for "old-k8s-version-556000" (driver="qemu2")
	I0913 12:19:58.343325    6237 client.go:168] LocalClient.Create starting
	I0913 12:19:58.343427    6237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:19:58.343481    6237 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:58.343495    6237 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:58.343545    6237 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:19:58.343576    6237 main.go:141] libmachine: Decoding PEM data...
	I0913 12:19:58.343594    6237 main.go:141] libmachine: Parsing certificate...
	I0913 12:19:58.343967    6237 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:19:58.504637    6237 main.go:141] libmachine: Creating SSH key...
	I0913 12:19:58.692870    6237 main.go:141] libmachine: Creating Disk image...
	I0913 12:19:58.692885    6237 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:19:58.693151    6237 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:19:58.703420    6237 main.go:141] libmachine: STDOUT: 
	I0913 12:19:58.703442    6237 main.go:141] libmachine: STDERR: 
	I0913 12:19:58.703530    6237 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2 +20000M
	I0913 12:19:58.711887    6237 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:19:58.711905    6237 main.go:141] libmachine: STDERR: 
	I0913 12:19:58.711919    6237 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:19:58.711926    6237 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:19:58.711935    6237 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:19:58.711974    6237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f1:1e:33:d2:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:19:58.713630    6237 main.go:141] libmachine: STDOUT: 
	I0913 12:19:58.713646    6237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:19:58.713664    6237 client.go:171] duration metric: took 370.337459ms to LocalClient.Create
	I0913 12:20:00.715854    6237 start.go:128] duration metric: took 2.41261425s to createHost
	I0913 12:20:00.715982    6237 start.go:83] releasing machines lock for "old-k8s-version-556000", held for 2.41294675s
	W0913 12:20:00.716405    6237 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-556000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:00.731144    6237 out.go:201] 
	W0913 12:20:00.734301    6237 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:00.734317    6237 out.go:270] * 
	* 
	W0913 12:20:00.735766    6237 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:00.745102    6237 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-556000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (54.862875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)
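
Every failure in this serial group traces back to the same root cause visible in the log above: the socket_vmnet daemon is not listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the VM never boots. A minimal triage on the build host might look like the sketch below (the Homebrew service management is an assumption; the report does not show how socket_vmnet was installed on this agent):

	# confirm whether anything is serving the socket the log points at
	ls -l /var/run/socket_vmnet
	# if socket_vmnet came from Homebrew, restarting its root-owned
	# service is the usual way to bring the listener back
	sudo brew services restart socket_vmnet

With the daemon down, every qemu2-driver test that selects the socket_vmnet network fails with the same "Connection refused".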

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-556000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-556000 create -f testdata/busybox.yaml: exit status 1 (29.184542ms)

** stderr ** 
	error: context "old-k8s-version-556000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-556000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (29.611958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (30.105791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
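
The kubectl failures here and in the following subtests are a cascade from FirstStart, not independent bugs: because the VM never started, minikube never wrote an old-k8s-version-556000 context into the kubeconfig, so every kubectl --context call fails immediately. A quick way to verify that the context is absent (using the KUBECONFIG path shown in the log) would be:

	# lists the contexts minikube managed to register; the failed
	# profile should be missing from this output
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19636-1170/kubeconfig config get-contexts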

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-556000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-556000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-556000 describe deploy/metrics-server -n kube-system: exit status 1 (27.112667ms)

** stderr ** 
	error: context "old-k8s-version-556000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-556000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (30.004917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-556000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-556000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.199948167s)

-- stdout --
	* [old-k8s-version-556000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-556000" primary control-plane node in "old-k8s-version-556000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-556000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-556000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:20:04.364351    6285 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:04.364490    6285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:04.364494    6285 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:04.364496    6285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:04.364646    6285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:04.365789    6285 out.go:352] Setting JSON to false
	I0913 12:20:04.383829    6285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4767,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:04.383902    6285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:04.389774    6285 out.go:177] * [old-k8s-version-556000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:04.398823    6285 notify.go:220] Checking for updates...
	I0913 12:20:04.402779    6285 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:04.406763    6285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:04.410780    6285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:04.413791    6285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:04.416795    6285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:04.419719    6285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:04.423042    6285 config.go:182] Loaded profile config "old-k8s-version-556000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0913 12:20:04.425629    6285 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 12:20:04.429762    6285 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:04.433741    6285 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:20:04.440806    6285 start.go:297] selected driver: qemu2
	I0913 12:20:04.440813    6285 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:04.440883    6285 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:04.443501    6285 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:04.443525    6285 cni.go:84] Creating CNI manager for ""
	I0913 12:20:04.443549    6285 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 12:20:04.443570    6285 start.go:340] cluster config:
	{Name:old-k8s-version-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:04.447714    6285 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:04.454750    6285 out.go:177] * Starting "old-k8s-version-556000" primary control-plane node in "old-k8s-version-556000" cluster
	I0913 12:20:04.458823    6285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 12:20:04.458851    6285 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 12:20:04.458865    6285 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:04.458939    6285 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:04.458945    6285 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 12:20:04.459022    6285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/old-k8s-version-556000/config.json ...
	I0913 12:20:04.459399    6285 start.go:360] acquireMachinesLock for old-k8s-version-556000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:04.459438    6285 start.go:364] duration metric: took 32.833µs to acquireMachinesLock for "old-k8s-version-556000"
	I0913 12:20:04.459447    6285 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:04.459452    6285 fix.go:54] fixHost starting: 
	I0913 12:20:04.459564    6285 fix.go:112] recreateIfNeeded on old-k8s-version-556000: state=Stopped err=<nil>
	W0913 12:20:04.459572    6285 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:04.463848    6285 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-556000" ...
	I0913 12:20:04.471723    6285 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:04.471766    6285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f1:1e:33:d2:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:20:04.473792    6285 main.go:141] libmachine: STDOUT: 
	I0913 12:20:04.473808    6285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:04.473844    6285 fix.go:56] duration metric: took 14.390292ms for fixHost
	I0913 12:20:04.473849    6285 start.go:83] releasing machines lock for "old-k8s-version-556000", held for 14.406083ms
	W0913 12:20:04.473854    6285 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:04.473890    6285 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:04.473894    6285 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:09.476025    6285 start.go:360] acquireMachinesLock for old-k8s-version-556000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:09.476671    6285 start.go:364] duration metric: took 456.166µs to acquireMachinesLock for "old-k8s-version-556000"
	I0913 12:20:09.476877    6285 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:09.476899    6285 fix.go:54] fixHost starting: 
	I0913 12:20:09.477665    6285 fix.go:112] recreateIfNeeded on old-k8s-version-556000: state=Stopped err=<nil>
	W0913 12:20:09.477687    6285 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:09.485970    6285 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-556000" ...
	I0913 12:20:09.490032    6285 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:09.490252    6285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:f1:1e:33:d2:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/old-k8s-version-556000/disk.qcow2
	I0913 12:20:09.498863    6285 main.go:141] libmachine: STDOUT: 
	I0913 12:20:09.498922    6285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:09.499000    6285 fix.go:56] duration metric: took 22.102875ms for fixHost
	I0913 12:20:09.499016    6285 start.go:83] releasing machines lock for "old-k8s-version-556000", held for 22.288958ms
	W0913 12:20:09.499182    6285 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-556000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-556000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:09.508055    6285 out.go:201] 
	W0913 12:20:09.512132    6285 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:09.512153    6285 out.go:270] * 
	* 
	W0913 12:20:09.513754    6285 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:09.522132    6285 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-556000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (60.855791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-556000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (31.949834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-556000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-556000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-556000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.903292ms)

** stderr ** 
	error: context "old-k8s-version-556000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-556000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (30.095416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-556000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (28.898167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
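
The want/got diff above is likewise a downstream symptom rather than a separate image regression: with the profile's VM stopped there is no runtime to enumerate, so image list evidently returned no images and the diff renders the entire expected v1.20.0 set as missing. Re-running the same command from the log would be expected to reproduce the empty result:

	# with the host stopped this returns nothing useful to diff against,
	# so every expected image is reported as missing
	out/minikube-darwin-arm64 -p old-k8s-version-556000 image list --format=json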

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-556000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-556000 --alsologtostderr -v=1: exit status 83 (41.042583ms)

-- stdout --
	* The control-plane node old-k8s-version-556000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-556000"

-- /stdout --
** stderr ** 
	I0913 12:20:09.786814    6308 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:09.787804    6308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:09.787809    6308 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:09.787812    6308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:09.787991    6308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:09.788194    6308 out.go:352] Setting JSON to false
	I0913 12:20:09.788199    6308 mustload.go:65] Loading cluster: old-k8s-version-556000
	I0913 12:20:09.788424    6308 config.go:182] Loaded profile config "old-k8s-version-556000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0913 12:20:09.792070    6308 out.go:177] * The control-plane node old-k8s-version-556000 host is not running: state=Stopped
	I0913 12:20:09.795087    6308 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-556000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-556000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (29.211459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (29.752167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-556000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-560000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-560000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.761141708s)

-- stdout --
	* [no-preload-560000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-560000" primary control-plane node in "no-preload-560000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-560000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:20:10.111386    6325 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:10.111524    6325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:10.111535    6325 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:10.111538    6325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:10.111662    6325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:10.112749    6325 out.go:352] Setting JSON to false
	I0913 12:20:10.128946    6325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4773,"bootTime":1726250437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:10.129024    6325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:10.133544    6325 out.go:177] * [no-preload-560000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:10.141703    6325 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:10.141770    6325 notify.go:220] Checking for updates...
	I0913 12:20:10.148631    6325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:10.151649    6325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:10.154603    6325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:10.157680    6325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:10.160671    6325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:10.163988    6325 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:10.164051    6325 config.go:182] Loaded profile config "stopped-upgrade-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 12:20:10.164098    6325 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:10.168628    6325 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:20:10.175518    6325 start.go:297] selected driver: qemu2
	I0913 12:20:10.175525    6325 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:20:10.175534    6325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:10.177948    6325 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:20:10.180607    6325 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:20:10.183673    6325 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:10.183689    6325 cni.go:84] Creating CNI manager for ""
	I0913 12:20:10.183710    6325 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:10.183718    6325 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:20:10.183751    6325 start.go:340] cluster config:
	{Name:no-preload-560000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-560000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:10.187521    6325 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.195595    6325 out.go:177] * Starting "no-preload-560000" primary control-plane node in "no-preload-560000" cluster
	I0913 12:20:10.198557    6325 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:10.198629    6325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/no-preload-560000/config.json ...
	I0913 12:20:10.198643    6325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/no-preload-560000/config.json: {Name:mk16ac9b5d6a7a42668c0e1f565420da50e3e50b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:20:10.198644    6325 cache.go:107] acquiring lock: {Name:mk17f6d43c7206131d95df7c16bbacbac9092ee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198650    6325 cache.go:107] acquiring lock: {Name:mkb65d2451c7b3c342573204d131c065bccf052e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198714    6325 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 12:20:10.198725    6325 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.417µs
	I0913 12:20:10.198732    6325 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 12:20:10.198739    6325 cache.go:107] acquiring lock: {Name:mka8993a0c372eec4a2d32bcc7a759a47775b8f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198757    6325 cache.go:107] acquiring lock: {Name:mk254020bc5dfb914b8a262d8f871632b4d786b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198786    6325 cache.go:107] acquiring lock: {Name:mkc71b299271afec674acc09f8716786baae05ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198843    6325 cache.go:107] acquiring lock: {Name:mk14f647378e260c8321efa55709f8dcef94939d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198853    6325 cache.go:107] acquiring lock: {Name:mk2e9ad4c81128607e51f4969e9cb5de8fcd0f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198894    6325 cache.go:107] acquiring lock: {Name:mk235845be8d80e59ad58b1b3376d4ce5678b3b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:10.198937    6325 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 12:20:10.198977    6325 start.go:360] acquireMachinesLock for no-preload-560000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:10.198993    6325 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 12:20:10.199048    6325 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 12:20:10.199075    6325 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 12:20:10.199087    6325 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 12:20:10.199091    6325 start.go:364] duration metric: took 98.209µs to acquireMachinesLock for "no-preload-560000"
	I0913 12:20:10.199165    6325 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 12:20:10.199175    6325 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 12:20:10.199124    6325 start.go:93] Provisioning new machine with config: &{Name:no-preload-560000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-560000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:10.199216    6325 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:10.207558    6325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:10.210504    6325 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 12:20:10.210506    6325 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 12:20:10.210724    6325 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 12:20:10.211077    6325 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 12:20:10.213006    6325 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 12:20:10.213053    6325 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 12:20:10.213102    6325 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 12:20:10.223626    6325 start.go:159] libmachine.API.Create for "no-preload-560000" (driver="qemu2")
	I0913 12:20:10.223645    6325 client.go:168] LocalClient.Create starting
	I0913 12:20:10.223711    6325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:10.223741    6325 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:10.223752    6325 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:10.223787    6325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:10.223810    6325 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:10.223818    6325 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:10.224157    6325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:10.384595    6325 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:10.443181    6325 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:10.443200    6325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:10.443407    6325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:10.452591    6325 main.go:141] libmachine: STDOUT: 
	I0913 12:20:10.452613    6325 main.go:141] libmachine: STDERR: 
	I0913 12:20:10.452666    6325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2 +20000M
	I0913 12:20:10.462132    6325 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:10.462177    6325 main.go:141] libmachine: STDERR: 
	I0913 12:20:10.462196    6325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:10.462200    6325 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:10.462216    6325 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:10.462261    6325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:2e:66:f7:e2:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:10.464103    6325 main.go:141] libmachine: STDOUT: 
	I0913 12:20:10.464117    6325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:10.464138    6325 client.go:171] duration metric: took 240.493291ms to LocalClient.Create
	I0913 12:20:10.655790    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 12:20:10.666578    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 12:20:10.683060    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0913 12:20:10.688624    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 12:20:10.700431    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 12:20:10.701441    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 12:20:10.731291    6325 cache.go:162] opening:  /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0913 12:20:10.884915    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0913 12:20:10.884944    6325 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 686.220083ms
	I0913 12:20:10.884959    6325 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0913 12:20:12.464279    6325 start.go:128] duration metric: took 2.265084208s to createHost
	I0913 12:20:12.464327    6325 start.go:83] releasing machines lock for "no-preload-560000", held for 2.265271333s
	W0913 12:20:12.464365    6325 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:12.470836    6325 out.go:177] * Deleting "no-preload-560000" in qemu2 ...
	W0913 12:20:12.503224    6325 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:12.503240    6325 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:14.113574    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0913 12:20:14.113602    6325 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.914817583s
	I0913 12:20:14.113613    6325 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0913 12:20:14.807319    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0913 12:20:14.807370    6325 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.608827959s
	I0913 12:20:14.807380    6325 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0913 12:20:15.179048    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0913 12:20:15.179082    6325 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.980439917s
	I0913 12:20:15.179096    6325 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0913 12:20:15.218822    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0913 12:20:15.218836    6325 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 5.020127209s
	I0913 12:20:15.218844    6325 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0913 12:20:15.283801    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0913 12:20:15.283816    6325 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.085134709s
	I0913 12:20:15.283824    6325 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0913 12:20:17.503729    6325 start.go:360] acquireMachinesLock for no-preload-560000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:17.503870    6325 start.go:364] duration metric: took 119.667µs to acquireMachinesLock for "no-preload-560000"
	I0913 12:20:17.503898    6325 start.go:93] Provisioning new machine with config: &{Name:no-preload-560000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-560000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:17.503929    6325 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:17.514123    6325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:17.529936    6325 start.go:159] libmachine.API.Create for "no-preload-560000" (driver="qemu2")
	I0913 12:20:17.529967    6325 client.go:168] LocalClient.Create starting
	I0913 12:20:17.530038    6325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:17.530072    6325 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:17.530084    6325 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:17.530131    6325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:17.530155    6325 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:17.530165    6325 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:17.530467    6325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:17.692845    6325 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:17.776632    6325 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:17.776642    6325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:17.776882    6325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:17.786602    6325 main.go:141] libmachine: STDOUT: 
	I0913 12:20:17.786633    6325 main.go:141] libmachine: STDERR: 
	I0913 12:20:17.786693    6325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2 +20000M
	I0913 12:20:17.794842    6325 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:17.794858    6325 main.go:141] libmachine: STDERR: 
	I0913 12:20:17.794874    6325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:17.794879    6325 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:17.794891    6325 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:17.794933    6325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:fc:ab:88:e9:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:17.796740    6325 main.go:141] libmachine: STDOUT: 
	I0913 12:20:17.796830    6325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:17.796844    6325 client.go:171] duration metric: took 266.879125ms to LocalClient.Create
	I0913 12:20:17.998890    6325 cache.go:157] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0913 12:20:17.998917    6325 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.800352125s
	I0913 12:20:17.998930    6325 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0913 12:20:17.998943    6325 cache.go:87] Successfully saved all images to host disk.
	I0913 12:20:19.799090    6325 start.go:128] duration metric: took 2.295188916s to createHost
	I0913 12:20:19.799159    6325 start.go:83] releasing machines lock for "no-preload-560000", held for 2.295334167s
	W0913 12:20:19.799579    6325 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-560000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-560000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:19.808356    6325 out.go:201] 
	W0913 12:20:19.818416    6325 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:19.818452    6325 out.go:270] * 
	* 
	W0913 12:20:19.821148    6325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:19.830273    6325 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-560000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (65.077458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
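
Every failure in this group traces back to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and VM creation aborts. A minimal diagnostic sketch in Go (the probe program and its messages are illustrative, not part of minikube) that checks whether the socket accepts connections:

    package main

    // Probe the socket_vmnet unix socket; the path matches the
    // SocketVMnetPath value in the cluster config logged above.
    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// This is the state the log shows: "Connection refused"
    		// because the socket_vmnet daemon is not running.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

On a healthy agent this would print the success line; on this run it would exit 1 with "connection refused", matching the STDERR captured above.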

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-560000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-560000 create -f testdata/busybox.yaml: exit status 1 (29.488834ms)

** stderr ** 
	error: context "no-preload-560000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-560000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (29.900791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (30.395334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
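
The error `context "no-preload-560000" does not exist` follows directly from the failed FirstStart: the cluster was never created, so minikube never wrote a context for it into the kubeconfig that kubectl reads. A minimal sketch, assuming the k8s.io/client-go module is available, that lists the contexts kubectl would actually see:

    package main

    // List kubeconfig contexts; a profile whose VM never started
    // ("no-preload-560000") is absent from this list.
    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    	if err != nil {
    		panic(err)
    	}
    	for name := range cfg.Contexts {
    		fmt.Println(name)
    	}
    }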

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-560000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-560000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-560000 describe deploy/metrics-server -n kube-system: exit status 1 (26.717541ms)

** stderr ** 
	error: context "no-preload-560000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-560000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (29.987083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
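
The assertion here expects the --registries=MetricsServer=fake.domain override to be prepended to the addon image, producing "fake.domain/registry.k8s.io/echoserver:1.4"; it fails only because the deployment could not be inspected on a cluster that never started. A hypothetical helper (not minikube's actual implementation) showing the rewrite the test checks for:

    package main

    import "fmt"

    // applyRegistryOverride is illustrative only: prefix an addon image
    // with the overridden registry, as the test assertion expects.
    func applyRegistryOverride(registry, image string) string {
    	if registry == "" {
    		return image
    	}
    	return registry + "/" + image
    }

    func main() {
    	// Prints: fake.domain/registry.k8s.io/echoserver:1.4
    	fmt.Println(applyRegistryOverride("fake.domain", "registry.k8s.io/echoserver:1.4"))
    }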

TestStartStop/group/no-preload/serial/SecondStart (5.34s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-560000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-560000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.264108709s)

-- stdout --
	* [no-preload-560000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-560000" primary control-plane node in "no-preload-560000" cluster
	* Restarting existing qemu2 VM for "no-preload-560000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-560000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:20:22.933540    6408 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:22.933709    6408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:22.933713    6408 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:22.933715    6408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:22.933850    6408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:22.935144    6408 out.go:352] Setting JSON to false
	I0913 12:20:22.957785    6408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4785,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:22.957873    6408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:22.963883    6408 out.go:177] * [no-preload-560000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:22.971208    6408 notify.go:220] Checking for updates...
	I0913 12:20:22.986151    6408 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:22.994010    6408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:23.003084    6408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:23.011092    6408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:23.020084    6408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:23.029086    6408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:23.034340    6408 config.go:182] Loaded profile config "no-preload-560000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:23.034571    6408 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:23.043094    6408 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:20:23.047148    6408 start.go:297] selected driver: qemu2
	I0913 12:20:23.047152    6408 start.go:901] validating driver "qemu2" against &{Name:no-preload-560000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-560000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:23.047218    6408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:23.049553    6408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:23.049579    6408 cni.go:84] Creating CNI manager for ""
	I0913 12:20:23.049611    6408 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:23.049630    6408 start.go:340] cluster config:
	{Name:no-preload-560000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-560000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:23.052970    6408 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.064037    6408 out.go:177] * Starting "no-preload-560000" primary control-plane node in "no-preload-560000" cluster
	I0913 12:20:23.070092    6408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:23.070183    6408 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/no-preload-560000/config.json ...
	I0913 12:20:23.070252    6408 cache.go:107] acquiring lock: {Name:mk14f647378e260c8321efa55709f8dcef94939d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070262    6408 cache.go:107] acquiring lock: {Name:mk17f6d43c7206131d95df7c16bbacbac9092ee8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070288    6408 cache.go:107] acquiring lock: {Name:mk2e9ad4c81128607e51f4969e9cb5de8fcd0f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070312    6408 cache.go:107] acquiring lock: {Name:mka8993a0c372eec4a2d32bcc7a759a47775b8f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070323    6408 cache.go:107] acquiring lock: {Name:mkc71b299271afec674acc09f8716786baae05ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070336    6408 cache.go:107] acquiring lock: {Name:mk254020bc5dfb914b8a262d8f871632b4d786b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070401    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0913 12:20:23.070321    6408 cache.go:107] acquiring lock: {Name:mkb65d2451c7b3c342573204d131c065bccf052e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070415    6408 cache.go:107] acquiring lock: {Name:mk235845be8d80e59ad58b1b3376d4ce5678b3b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.070456    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0913 12:20:23.070461    6408 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 149.75µs
	I0913 12:20:23.070467    6408 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0913 12:20:23.070409    6408 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 119.5µs
	I0913 12:20:23.070470    6408 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0913 12:20:23.070412    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 12:20:23.070489    6408 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 228.25µs
	I0913 12:20:23.070497    6408 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 12:20:23.070506    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0913 12:20:23.070527    6408 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 189.916µs
	I0913 12:20:23.070532    6408 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0913 12:20:23.070507    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0913 12:20:23.070538    6408 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 257.5µs
	I0913 12:20:23.070541    6408 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0913 12:20:23.070510    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0913 12:20:23.070544    6408 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 222.166µs
	I0913 12:20:23.070547    6408 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0913 12:20:23.070545    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0913 12:20:23.070547    6408 start.go:360] acquireMachinesLock for no-preload-560000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:23.070551    6408 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 315.417µs
	I0913 12:20:23.070555    6408 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0913 12:20:23.070583    6408 start.go:364] duration metric: took 31.709µs to acquireMachinesLock for "no-preload-560000"
	I0913 12:20:23.070583    6408 cache.go:115] /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0913 12:20:23.070593    6408 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:23.070598    6408 fix.go:54] fixHost starting: 
	I0913 12:20:23.070597    6408 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 236.792µs
	I0913 12:20:23.070602    6408 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0913 12:20:23.070606    6408 cache.go:87] Successfully saved all images to host disk.
	I0913 12:20:23.070716    6408 fix.go:112] recreateIfNeeded on no-preload-560000: state=Stopped err=<nil>
	W0913 12:20:23.070725    6408 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:23.083067    6408 out.go:177] * Restarting existing qemu2 VM for "no-preload-560000" ...
	I0913 12:20:23.090107    6408 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:23.090138    6408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:fc:ab:88:e9:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:23.091915    6408 main.go:141] libmachine: STDOUT: 
	I0913 12:20:23.091934    6408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:23.091964    6408 fix.go:56] duration metric: took 21.365042ms for fixHost
	I0913 12:20:23.091968    6408 start.go:83] releasing machines lock for "no-preload-560000", held for 21.379792ms
	W0913 12:20:23.091974    6408 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:23.092019    6408 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:23.092023    6408 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:28.094122    6408 start.go:360] acquireMachinesLock for no-preload-560000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:28.094606    6408 start.go:364] duration metric: took 384.708µs to acquireMachinesLock for "no-preload-560000"
	I0913 12:20:28.094730    6408 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:28.094749    6408 fix.go:54] fixHost starting: 
	I0913 12:20:28.095512    6408 fix.go:112] recreateIfNeeded on no-preload-560000: state=Stopped err=<nil>
	W0913 12:20:28.095537    6408 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:28.117168    6408 out.go:177] * Restarting existing qemu2 VM for "no-preload-560000" ...
	I0913 12:20:28.122007    6408 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:28.122370    6408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:fc:ab:88:e9:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/no-preload-560000/disk.qcow2
	I0913 12:20:28.132106    6408 main.go:141] libmachine: STDOUT: 
	I0913 12:20:28.132175    6408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:28.132304    6408 fix.go:56] duration metric: took 37.554041ms for fixHost
	I0913 12:20:28.132333    6408 start.go:83] releasing machines lock for "no-preload-560000", held for 37.704125ms
	W0913 12:20:28.132556    6408 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-560000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-560000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:28.139986    6408 out.go:201] 
	W0913 12:20:28.143040    6408 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:28.143066    6408 out.go:270] * 
	* 
	W0913 12:20:28.145401    6408 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:28.154766    6408 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-560000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (68.841125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.34s)
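
The SecondStart log above shows minikube's host-start retry policy: one failed attempt, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, then a terminal GUEST_PROVISION exit. A compressed sketch of that control flow (function names and messages are illustrative, not minikube's code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the real driver start; here it always fails
    // the way this run does, so the sketch exercises both attempts.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    		time.Sleep(5 * time.Second) // the fixed back-off visible in the log
    		if err := startHost(); err != nil {
    			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
    		}
    	}
    }

One retry with a fixed delay is why the whole test fails in roughly five seconds: both attempts hit the same dead socket.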

TestStartStop/group/embed-certs/serial/FirstStart (11.06s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-085000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-085000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.988534333s)

-- stdout --
	* [embed-certs-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-085000" primary control-plane node in "embed-certs-085000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-085000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:20:23.143390    6415 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:23.143519    6415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:23.143523    6415 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:23.143526    6415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:23.143640    6415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:23.144712    6415 out.go:352] Setting JSON to false
	I0913 12:20:23.160971    6415 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4786,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:23.161040    6415 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:23.166131    6415 out.go:177] * [embed-certs-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:23.174083    6415 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:23.174146    6415 notify.go:220] Checking for updates...
	I0913 12:20:23.181008    6415 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:23.184168    6415 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:23.187155    6415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:23.188683    6415 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:23.192080    6415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:23.195418    6415 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:23.195498    6415 config.go:182] Loaded profile config "no-preload-560000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:23.195556    6415 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:23.198945    6415 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:20:23.206061    6415 start.go:297] selected driver: qemu2
	I0913 12:20:23.206068    6415 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:20:23.206076    6415 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:23.208466    6415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:20:23.211174    6415 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:20:23.214260    6415 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:23.214282    6415 cni.go:84] Creating CNI manager for ""
	I0913 12:20:23.214314    6415 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:23.214325    6415 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:20:23.214364    6415 start.go:340] cluster config:
	{Name:embed-certs-085000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:23.218026    6415 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:23.224004    6415 out.go:177] * Starting "embed-certs-085000" primary control-plane node in "embed-certs-085000" cluster
	I0913 12:20:23.228124    6415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:23.228153    6415 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:20:23.228165    6415 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:23.228225    6415 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:23.228230    6415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:20:23.228297    6415 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/embed-certs-085000/config.json ...
	I0913 12:20:23.228307    6415 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/embed-certs-085000/config.json: {Name:mk5fa7b1fb0fd087d0bd2a898c3e35a3ef133232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:20:23.228509    6415 start.go:360] acquireMachinesLock for embed-certs-085000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:23.228539    6415 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "embed-certs-085000"
	I0913 12:20:23.228552    6415 start.go:93] Provisioning new machine with config: &{Name:embed-certs-085000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:23.228577    6415 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:23.236094    6415 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:23.252938    6415 start.go:159] libmachine.API.Create for "embed-certs-085000" (driver="qemu2")
	I0913 12:20:23.252965    6415 client.go:168] LocalClient.Create starting
	I0913 12:20:23.253037    6415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:23.253066    6415 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:23.253075    6415 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:23.253110    6415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:23.253134    6415 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:23.253146    6415 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:23.253491    6415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:23.442482    6415 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:23.552893    6415 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:23.552902    6415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:23.553090    6415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:23.562457    6415 main.go:141] libmachine: STDOUT: 
	I0913 12:20:23.562476    6415 main.go:141] libmachine: STDERR: 
	I0913 12:20:23.562536    6415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2 +20000M
	I0913 12:20:23.570300    6415 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:23.570312    6415 main.go:141] libmachine: STDERR: 
	I0913 12:20:23.570327    6415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:23.570330    6415 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:23.570340    6415 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:23.570368    6415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:2a:ec:69:fd:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:23.571933    6415 main.go:141] libmachine: STDOUT: 
	I0913 12:20:23.571950    6415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:23.571975    6415 client.go:171] duration metric: took 319.012084ms to LocalClient.Create
	I0913 12:20:25.574146    6415 start.go:128] duration metric: took 2.345607042s to createHost
	I0913 12:20:25.574195    6415 start.go:83] releasing machines lock for "embed-certs-085000", held for 2.345705833s
	W0913 12:20:25.574245    6415 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:25.594421    6415 out.go:177] * Deleting "embed-certs-085000" in qemu2 ...
	W0913 12:20:25.625317    6415 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:25.625336    6415 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:30.627523    6415 start.go:360] acquireMachinesLock for embed-certs-085000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:31.219576    6415 start.go:364] duration metric: took 591.92875ms to acquireMachinesLock for "embed-certs-085000"
	I0913 12:20:31.219727    6415 start.go:93] Provisioning new machine with config: &{Name:embed-certs-085000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:31.220049    6415 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:31.234684    6415 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:31.284964    6415 start.go:159] libmachine.API.Create for "embed-certs-085000" (driver="qemu2")
	I0913 12:20:31.285009    6415 client.go:168] LocalClient.Create starting
	I0913 12:20:31.285138    6415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:31.285201    6415 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:31.285215    6415 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:31.285287    6415 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:31.285332    6415 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:31.285342    6415 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:31.286007    6415 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:31.478236    6415 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:32.032501    6415 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:32.032517    6415 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:32.032775    6415 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:32.042802    6415 main.go:141] libmachine: STDOUT: 
	I0913 12:20:32.042828    6415 main.go:141] libmachine: STDERR: 
	I0913 12:20:32.042914    6415 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2 +20000M
	I0913 12:20:32.050815    6415 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:32.050829    6415 main.go:141] libmachine: STDERR: 
	I0913 12:20:32.050841    6415 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:32.050847    6415 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:32.050856    6415 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:32.050891    6415 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:65:13:76:49:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:32.052465    6415 main.go:141] libmachine: STDOUT: 
	I0913 12:20:32.052477    6415 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:32.052493    6415 client.go:171] duration metric: took 767.499291ms to LocalClient.Create
	I0913 12:20:34.053975    6415 start.go:128] duration metric: took 2.833957708s to createHost
	I0913 12:20:34.054036    6415 start.go:83] releasing machines lock for "embed-certs-085000", held for 2.834496625s
	W0913 12:20:34.054468    6415 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-085000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-085000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:34.070048    6415 out.go:201] 
	W0913 12:20:34.075082    6415 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:34.075144    6415 out.go:270] * 
	* 
	W0913 12:20:34.077859    6415 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:34.088134    6415 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-085000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (64.763292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.06s)
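
Every qemu2 provisioning attempt in this run fails at the same point: socket_vmnet_client cannot reach the vmnet daemon's unix socket, so the VM never boots and every later subtest sees a Stopped host. A minimal triage sketch for the build host (the Homebrew service name is an assumption based on a typical socket_vmnet install, not something recorded in this log):

    # Is anything serving the socket the qemu2 driver expects?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If socket_vmnet was installed via Homebrew, restarting its root-level
    # service usually recreates the socket (service name assumed):
    sudo brew services restart socket_vmnet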

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-560000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (32.220208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
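
The context "no-preload-560000" does not exist errors in this and the following subtests are downstream symptoms: FirstStart never created the cluster, so no kubeconfig entry was written. A quick confirmation with plain kubectl against the kubeconfig this run uses:

    # Lists the contexts actually present; a profile that never started won't appear
    kubectl --kubeconfig /Users/jenkins/minikube-integration/19636-1170/kubeconfig config get-contexts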

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-560000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-560000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-560000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.366417ms)
** stderr ** 
	error: context "no-preload-560000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-560000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (29.3075ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
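
On a cluster that did start, the image assertion behind start_stop_delete_test.go:297 reduces to reading the container image off the deployment. A hand-run equivalent using standard kubectl jsonpath (only meaningful once the context exists):

    kubectl --context no-preload-560000 -n kubernetes-dashboard \
      get deploy/dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'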

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-560000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (29.19025ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
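
The (-want +got) block above is a go-cmp diff: each line prefixed with - is an image the test expected image list to report, and the got side is empty because the VM never booted. A spot-check sketch for a working profile (assumes jq is installed and that minikube's JSON output carries a repoTags field, as in recent releases):

    out/minikube-darwin-arm64 -p no-preload-560000 image list --format=json \
      | jq -r '.[].repoTags[]' | sort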

TestStartStop/group/no-preload/serial/Pause (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-560000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-560000 --alsologtostderr -v=1: exit status 83 (38.291291ms)
-- stdout --
	* The control-plane node no-preload-560000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-560000"
-- /stdout --
** stderr ** 
	I0913 12:20:28.425208    6437 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:28.425353    6437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:28.425357    6437 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:28.425359    6437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:28.425477    6437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:28.425702    6437 out.go:352] Setting JSON to false
	I0913 12:20:28.425706    6437 mustload.go:65] Loading cluster: no-preload-560000
	I0913 12:20:28.425905    6437 config.go:182] Loaded profile config "no-preload-560000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:28.428835    6437 out.go:177] * The control-plane node no-preload-560000 host is not running: state=Stopped
	I0913 12:20:28.431933    6437 out.go:177]   To start a cluster, run: "minikube start -p no-preload-560000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-560000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (28.964709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (28.489125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-560000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
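
Exit status 83 here is not a crash: the CLI declines to pause because the host is Stopped and prints the advice seen in stdout, and the exit status 7 from the status probes is the code the harness itself treats as "stopped, may be ok". A guard sketch using the same {{.Host}} template the harness uses:

    # Only attempt the pause when the host reports Running
    if [ "$(out/minikube-darwin-arm64 status -p no-preload-560000 --format '{{.Host}}')" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p no-preload-560000
    fi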

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-923000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-923000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.893080458s)
-- stdout --
	* [default-k8s-diff-port-923000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-923000" primary control-plane node in "default-k8s-diff-port-923000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-923000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0913 12:20:28.842959    6461 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:28.843087    6461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:28.843090    6461 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:28.843093    6461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:28.843235    6461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:28.844304    6461 out.go:352] Setting JSON to false
	I0913 12:20:28.860426    6461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4791,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:28.860495    6461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:28.864883    6461 out.go:177] * [default-k8s-diff-port-923000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:28.872854    6461 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:28.872912    6461 notify.go:220] Checking for updates...
	I0913 12:20:28.877306    6461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:28.880898    6461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:28.883890    6461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:28.886925    6461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:28.889853    6461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:28.893244    6461 config.go:182] Loaded profile config "embed-certs-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:28.893308    6461 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:28.893354    6461 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:28.897907    6461 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:20:28.904820    6461 start.go:297] selected driver: qemu2
	I0913 12:20:28.904826    6461 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:20:28.904832    6461 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:28.907078    6461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 12:20:28.909872    6461 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:20:28.912997    6461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:28.913016    6461 cni.go:84] Creating CNI manager for ""
	I0913 12:20:28.913039    6461 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:28.913046    6461 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:20:28.913082    6461 start.go:340] cluster config:
	{Name:default-k8s-diff-port-923000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-923000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:28.916767    6461 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:28.923680    6461 out.go:177] * Starting "default-k8s-diff-port-923000" primary control-plane node in "default-k8s-diff-port-923000" cluster
	I0913 12:20:28.927871    6461 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:28.927884    6461 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:20:28.927894    6461 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:28.927947    6461 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:28.927953    6461 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:20:28.928018    6461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/default-k8s-diff-port-923000/config.json ...
	I0913 12:20:28.928030    6461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/default-k8s-diff-port-923000/config.json: {Name:mk1714c55fc2e5a72b7f518afb9419c5a372fc87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:20:28.928420    6461 start.go:360] acquireMachinesLock for default-k8s-diff-port-923000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:28.928457    6461 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "default-k8s-diff-port-923000"
	I0913 12:20:28.928468    6461 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-923000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:28.928495    6461 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:28.934815    6461 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:28.952234    6461 start.go:159] libmachine.API.Create for "default-k8s-diff-port-923000" (driver="qemu2")
	I0913 12:20:28.952278    6461 client.go:168] LocalClient.Create starting
	I0913 12:20:28.952346    6461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:28.952396    6461 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:28.952404    6461 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:28.952446    6461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:28.952472    6461 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:28.952479    6461 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:28.952821    6461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:29.112802    6461 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:29.198085    6461 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:29.198095    6461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:29.198299    6461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:29.207466    6461 main.go:141] libmachine: STDOUT: 
	I0913 12:20:29.207486    6461 main.go:141] libmachine: STDERR: 
	I0913 12:20:29.207551    6461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2 +20000M
	I0913 12:20:29.215402    6461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:29.215486    6461 main.go:141] libmachine: STDERR: 
	I0913 12:20:29.215502    6461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:29.215507    6461 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:29.215517    6461 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:29.215549    6461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d0:a1:3d:6f:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:29.217162    6461 main.go:141] libmachine: STDOUT: 
	I0913 12:20:29.217176    6461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:29.217195    6461 client.go:171] duration metric: took 264.917792ms to LocalClient.Create
	I0913 12:20:31.219328    6461 start.go:128] duration metric: took 2.290871375s to createHost
	I0913 12:20:31.219399    6461 start.go:83] releasing machines lock for "default-k8s-diff-port-923000", held for 2.290992s
	W0913 12:20:31.219448    6461 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:31.249673    6461 out.go:177] * Deleting "default-k8s-diff-port-923000" in qemu2 ...
	W0913 12:20:31.275733    6461 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:31.275751    6461 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:36.277840    6461 start.go:360] acquireMachinesLock for default-k8s-diff-port-923000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:36.278099    6461 start.go:364] duration metric: took 186.333µs to acquireMachinesLock for "default-k8s-diff-port-923000"
	I0913 12:20:36.278146    6461 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-923000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:36.278347    6461 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:36.286838    6461 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:36.330798    6461 start.go:159] libmachine.API.Create for "default-k8s-diff-port-923000" (driver="qemu2")
	I0913 12:20:36.330861    6461 client.go:168] LocalClient.Create starting
	I0913 12:20:36.330987    6461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:36.331038    6461 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:36.331055    6461 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:36.331125    6461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:36.331159    6461 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:36.331179    6461 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:36.331833    6461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:36.502413    6461 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:36.644824    6461 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:36.644835    6461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:36.645034    6461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:36.654551    6461 main.go:141] libmachine: STDOUT: 
	I0913 12:20:36.654574    6461 main.go:141] libmachine: STDERR: 
	I0913 12:20:36.654637    6461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2 +20000M
	I0913 12:20:36.662415    6461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:36.662436    6461 main.go:141] libmachine: STDERR: 
	I0913 12:20:36.662448    6461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:36.662454    6461 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:36.662462    6461 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:36.662496    6461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:95:71:90:1a:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:36.664085    6461 main.go:141] libmachine: STDOUT: 
	I0913 12:20:36.664103    6461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:36.664116    6461 client.go:171] duration metric: took 333.2595ms to LocalClient.Create
	I0913 12:20:38.666184    6461 start.go:128] duration metric: took 2.387886125s to createHost
	I0913 12:20:38.666222    6461 start.go:83] releasing machines lock for "default-k8s-diff-port-923000", held for 2.388173208s
	W0913 12:20:38.666437    6461 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-923000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-923000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:38.676763    6461 out.go:201] 
	W0913 12:20:38.684829    6461 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:38.684866    6461 out.go:270] * 
	* 
	W0913 12:20:38.686530    6461 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:38.695710    6461 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-923000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (55.474958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
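
The stderr above shows the fixed provisioning sequence: qemu-img convert seeds a qcow2 from the raw boot image, qemu-img resize grows it by 20000 MB, and socket_vmnet_client wraps the qemu-system-aarch64 command so the guest NIC gets a vmnet-backed file descriptor (fd=3). The failure can be reproduced outside the test harness; a sketch assuming socket_vmnet_client's documented <socket> <command...> usage:

    # With the daemon down, this prints the same
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true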

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-085000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-085000 create -f testdata/busybox.yaml: exit status 1 (29.294708ms)
** stderr ** 
	error: context "embed-certs-085000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-085000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (29.098792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (29.051959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-085000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-085000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-085000 describe deploy/metrics-server -n kube-system: exit status 1 (26.362917ms)
** stderr ** 
	error: context "embed-certs-085000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-085000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (29.09ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.4s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-085000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-085000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.326754208s)
-- stdout --
	* [embed-certs-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-085000" primary control-plane node in "embed-certs-085000" cluster
	* Restarting existing qemu2 VM for "embed-certs-085000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-085000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0913 12:20:38.452350    6515 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:38.452497    6515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:38.452500    6515 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:38.452504    6515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:38.452643    6515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:38.453677    6515 out.go:352] Setting JSON to false
	I0913 12:20:38.469741    6515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4801,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:38.469813    6515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:38.473513    6515 out.go:177] * [embed-certs-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:38.480584    6515 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:38.480635    6515 notify.go:220] Checking for updates...
	I0913 12:20:38.488533    6515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:38.491576    6515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:38.494547    6515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:38.497535    6515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:38.500508    6515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:38.502223    6515 config.go:182] Loaded profile config "embed-certs-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:38.502525    6515 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:38.506491    6515 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:20:38.513364    6515 start.go:297] selected driver: qemu2
	I0913 12:20:38.513378    6515 start.go:901] validating driver "qemu2" against &{Name:embed-certs-085000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:38.513443    6515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:38.515615    6515 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:38.515645    6515 cni.go:84] Creating CNI manager for ""
	I0913 12:20:38.515667    6515 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:38.515684    6515 start.go:340] cluster config:
	{Name:embed-certs-085000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:38.518980    6515 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:38.526574    6515 out.go:177] * Starting "embed-certs-085000" primary control-plane node in "embed-certs-085000" cluster
	I0913 12:20:38.530510    6515 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:38.530528    6515 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:20:38.530540    6515 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:38.530599    6515 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:38.530605    6515 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:20:38.530670    6515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/embed-certs-085000/config.json ...
	I0913 12:20:38.531184    6515 start.go:360] acquireMachinesLock for embed-certs-085000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:38.666302    6515 start.go:364] duration metric: took 135.087334ms to acquireMachinesLock for "embed-certs-085000"
	I0913 12:20:38.666334    6515 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:38.666351    6515 fix.go:54] fixHost starting: 
	I0913 12:20:38.666737    6515 fix.go:112] recreateIfNeeded on embed-certs-085000: state=Stopped err=<nil>
	W0913 12:20:38.666767    6515 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:38.673685    6515 out.go:177] * Restarting existing qemu2 VM for "embed-certs-085000" ...
	I0913 12:20:38.680727    6515 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:38.680878    6515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:65:13:76:49:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:38.687646    6515 main.go:141] libmachine: STDOUT: 
	I0913 12:20:38.687698    6515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:38.687791    6515 fix.go:56] duration metric: took 21.433ms for fixHost
	I0913 12:20:38.687803    6515 start.go:83] releasing machines lock for "embed-certs-085000", held for 21.487875ms
	W0913 12:20:38.687822    6515 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:38.687932    6515 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:38.687945    6515 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:43.690104    6515 start.go:360] acquireMachinesLock for embed-certs-085000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:43.690655    6515 start.go:364] duration metric: took 429.75µs to acquireMachinesLock for "embed-certs-085000"
	I0913 12:20:43.690790    6515 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:43.690810    6515 fix.go:54] fixHost starting: 
	I0913 12:20:43.691571    6515 fix.go:112] recreateIfNeeded on embed-certs-085000: state=Stopped err=<nil>
	W0913 12:20:43.691600    6515 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:43.701088    6515 out.go:177] * Restarting existing qemu2 VM for "embed-certs-085000" ...
	I0913 12:20:43.705103    6515 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:43.705323    6515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:65:13:76:49:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/embed-certs-085000/disk.qcow2
	I0913 12:20:43.714605    6515 main.go:141] libmachine: STDOUT: 
	I0913 12:20:43.714682    6515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:43.714787    6515 fix.go:56] duration metric: took 23.973666ms for fixHost
	I0913 12:20:43.714813    6515 start.go:83] releasing machines lock for "embed-certs-085000", held for 24.134166ms
	W0913 12:20:43.714984    6515 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-085000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-085000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:43.724005    6515 out.go:201] 
	W0913 12:20:43.728140    6515 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:43.728170    6515 out.go:270] * 
	* 
	W0913 12:20:43.730730    6515 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:43.737953    6515 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-085000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (69.472459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.40s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-923000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-923000 create -f testdata/busybox.yaml: exit status 1 (29.149708ms)
** stderr ** 
	error: context "default-k8s-diff-port-923000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-923000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (29.871292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (29.064375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-923000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-923000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-923000 describe deploy/metrics-server -n kube-system: exit status 1 (26.007583ms)
** stderr ** 
	error: context "default-k8s-diff-port-923000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-923000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (29.079084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-923000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-923000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.190444709s)
-- stdout --
	* [default-k8s-diff-port-923000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-923000" primary control-plane node in "default-k8s-diff-port-923000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-923000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-923000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0913 12:20:42.432492    6556 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:42.432621    6556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:42.432624    6556 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:42.432627    6556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:42.432757    6556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:42.433772    6556 out.go:352] Setting JSON to false
	I0913 12:20:42.449695    6556 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4805,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:42.449770    6556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:42.454910    6556 out.go:177] * [default-k8s-diff-port-923000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:42.461842    6556 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:42.461920    6556 notify.go:220] Checking for updates...
	I0913 12:20:42.468888    6556 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:42.471849    6556 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:42.474924    6556 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:42.477833    6556 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:42.480899    6556 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:42.484142    6556 config.go:182] Loaded profile config "default-k8s-diff-port-923000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:42.484424    6556 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:42.488889    6556 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:20:42.495889    6556 start.go:297] selected driver: qemu2
	I0913 12:20:42.495895    6556 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-923000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:42.495968    6556 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:42.498229    6556 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 12:20:42.498253    6556 cni.go:84] Creating CNI manager for ""
	I0913 12:20:42.498276    6556 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:42.498297    6556 start.go:340] cluster config:
	{Name:default-k8s-diff-port-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-923000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:42.501717    6556 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:42.508882    6556 out.go:177] * Starting "default-k8s-diff-port-923000" primary control-plane node in "default-k8s-diff-port-923000" cluster
	I0913 12:20:42.512821    6556 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:42.512835    6556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:20:42.512847    6556 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:42.512909    6556 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:42.512914    6556 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:20:42.512975    6556 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/default-k8s-diff-port-923000/config.json ...
	I0913 12:20:42.513483    6556 start.go:360] acquireMachinesLock for default-k8s-diff-port-923000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:42.513518    6556 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "default-k8s-diff-port-923000"
	I0913 12:20:42.513533    6556 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:42.513539    6556 fix.go:54] fixHost starting: 
	I0913 12:20:42.513657    6556 fix.go:112] recreateIfNeeded on default-k8s-diff-port-923000: state=Stopped err=<nil>
	W0913 12:20:42.513665    6556 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:42.516948    6556 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-923000" ...
	I0913 12:20:42.523926    6556 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:42.523967    6556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:95:71:90:1a:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:42.525876    6556 main.go:141] libmachine: STDOUT: 
	I0913 12:20:42.525889    6556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:42.525918    6556 fix.go:56] duration metric: took 12.378292ms for fixHost
	I0913 12:20:42.525922    6556 start.go:83] releasing machines lock for "default-k8s-diff-port-923000", held for 12.400042ms
	W0913 12:20:42.525927    6556 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:42.525960    6556 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:42.525965    6556 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:47.528074    6556 start.go:360] acquireMachinesLock for default-k8s-diff-port-923000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:47.528477    6556 start.go:364] duration metric: took 325.917µs to acquireMachinesLock for "default-k8s-diff-port-923000"
	I0913 12:20:47.528592    6556 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:47.528613    6556 fix.go:54] fixHost starting: 
	I0913 12:20:47.529381    6556 fix.go:112] recreateIfNeeded on default-k8s-diff-port-923000: state=Stopped err=<nil>
	W0913 12:20:47.529407    6556 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:47.534850    6556 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-923000" ...
	I0913 12:20:47.552023    6556 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:47.552241    6556 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:95:71:90:1a:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/default-k8s-diff-port-923000/disk.qcow2
	I0913 12:20:47.561836    6556 main.go:141] libmachine: STDOUT: 
	I0913 12:20:47.561933    6556 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:47.562022    6556 fix.go:56] duration metric: took 33.408042ms for fixHost
	I0913 12:20:47.562045    6556 start.go:83] releasing machines lock for "default-k8s-diff-port-923000", held for 33.546459ms
	W0913 12:20:47.562347    6556 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-923000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-923000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:47.570727    6556 out.go:201] 
	W0913 12:20:47.573814    6556 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:47.573859    6556 out.go:270] * 
	* 
	W0913 12:20:47.576362    6556 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:47.584769    6556 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-923000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (67.345459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-085000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (32.5285ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-085000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-085000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-085000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.151209ms)
** stderr ** 
	error: context "embed-certs-085000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-085000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (28.2915ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-085000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (29.394ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
TestStartStop/group/embed-certs/serial/Pause (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-085000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-085000 --alsologtostderr -v=1: exit status 83 (40.084708ms)
-- stdout --
	* The control-plane node embed-certs-085000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-085000"
-- /stdout --
** stderr ** 
	I0913 12:20:44.007149    6575 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:44.007310    6575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:44.007313    6575 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:44.007315    6575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:44.007452    6575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:44.007671    6575 out.go:352] Setting JSON to false
	I0913 12:20:44.007676    6575 mustload.go:65] Loading cluster: embed-certs-085000
	I0913 12:20:44.007895    6575 config.go:182] Loaded profile config "embed-certs-085000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:44.012147    6575 out.go:177] * The control-plane node embed-certs-085000 host is not running: state=Stopped
	I0913 12:20:44.016094    6575 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-085000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-085000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (29.431084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (29.230084ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-085000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
TestStartStop/group/newest-cni/serial/FirstStart (9.95s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-175000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-175000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.877416959s)
-- stdout --
	* [newest-cni-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-175000" primary control-plane node in "newest-cni-175000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-175000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0913 12:20:44.321623    6592 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:44.321753    6592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:44.321757    6592 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:44.321759    6592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:44.321886    6592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:44.322974    6592 out.go:352] Setting JSON to false
	I0913 12:20:44.338838    6592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4807,"bootTime":1726250437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:44.338920    6592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:44.344189    6592 out.go:177] * [newest-cni-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:44.351120    6592 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:44.351176    6592 notify.go:220] Checking for updates...
	I0913 12:20:44.357081    6592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:44.360154    6592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:44.368101    6592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:44.371146    6592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:44.374083    6592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:44.377458    6592 config.go:182] Loaded profile config "default-k8s-diff-port-923000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:44.377519    6592 config.go:182] Loaded profile config "multinode-816000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:44.377575    6592 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:44.382127    6592 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 12:20:44.389123    6592 start.go:297] selected driver: qemu2
	I0913 12:20:44.389129    6592 start.go:901] validating driver "qemu2" against <nil>
	I0913 12:20:44.389136    6592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:44.391474    6592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0913 12:20:44.391518    6592 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0913 12:20:44.399066    6592 out.go:177] * Automatically selected the socket_vmnet network
	I0913 12:20:44.402233    6592 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 12:20:44.402261    6592 cni.go:84] Creating CNI manager for ""
	I0913 12:20:44.402288    6592 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:44.402292    6592 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 12:20:44.402320    6592 start.go:340] cluster config:
	{Name:newest-cni-175000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:44.405896    6592 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:44.413134    6592 out.go:177] * Starting "newest-cni-175000" primary control-plane node in "newest-cni-175000" cluster
	I0913 12:20:44.416000    6592 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:44.416013    6592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:20:44.416023    6592 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:44.416076    6592 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:44.416082    6592 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:20:44.416140    6592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/newest-cni-175000/config.json ...
	I0913 12:20:44.416150    6592 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/newest-cni-175000/config.json: {Name:mkcbc159fc2210e085fa9525814c0f8474293a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 12:20:44.416530    6592 start.go:360] acquireMachinesLock for newest-cni-175000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:44.416565    6592 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "newest-cni-175000"
	I0913 12:20:44.416575    6592 start.go:93] Provisioning new machine with config: &{Name:newest-cni-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:44.416605    6592 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:44.421113    6592 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:44.439567    6592 start.go:159] libmachine.API.Create for "newest-cni-175000" (driver="qemu2")
	I0913 12:20:44.439597    6592 client.go:168] LocalClient.Create starting
	I0913 12:20:44.439660    6592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:44.439689    6592 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:44.439700    6592 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:44.439739    6592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:44.439767    6592 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:44.439773    6592 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:44.440208    6592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:44.601299    6592 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:44.710281    6592 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:44.710287    6592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:44.710498    6592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:44.719738    6592 main.go:141] libmachine: STDOUT: 
	I0913 12:20:44.719758    6592 main.go:141] libmachine: STDERR: 
	I0913 12:20:44.719807    6592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2 +20000M
	I0913 12:20:44.727550    6592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:44.727573    6592 main.go:141] libmachine: STDERR: 
	I0913 12:20:44.727584    6592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:44.727589    6592 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:44.727598    6592 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:44.727629    6592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:98:67:70:21:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:44.729235    6592 main.go:141] libmachine: STDOUT: 
	I0913 12:20:44.729250    6592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:44.729270    6592 client.go:171] duration metric: took 289.6745ms to LocalClient.Create
	I0913 12:20:46.731390    6592 start.go:128] duration metric: took 2.314827834s to createHost
	I0913 12:20:46.731437    6592 start.go:83] releasing machines lock for "newest-cni-175000", held for 2.314926042s
	W0913 12:20:46.731490    6592 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:46.745845    6592 out.go:177] * Deleting "newest-cni-175000" in qemu2 ...
	W0913 12:20:46.780002    6592 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:46.780025    6592 start.go:729] Will try again in 5 seconds ...
	I0913 12:20:51.782181    6592 start.go:360] acquireMachinesLock for newest-cni-175000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:51.782817    6592 start.go:364] duration metric: took 490.5µs to acquireMachinesLock for "newest-cni-175000"
	I0913 12:20:51.783008    6592 start.go:93] Provisioning new machine with config: &{Name:newest-cni-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 12:20:51.783302    6592 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 12:20:51.787890    6592 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 12:20:51.835892    6592 start.go:159] libmachine.API.Create for "newest-cni-175000" (driver="qemu2")
	I0913 12:20:51.835946    6592 client.go:168] LocalClient.Create starting
	I0913 12:20:51.836084    6592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/ca.pem
	I0913 12:20:51.836150    6592 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:51.836168    6592 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:51.836246    6592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19636-1170/.minikube/certs/cert.pem
	I0913 12:20:51.836292    6592 main.go:141] libmachine: Decoding PEM data...
	I0913 12:20:51.836307    6592 main.go:141] libmachine: Parsing certificate...
	I0913 12:20:51.836840    6592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso...
	I0913 12:20:52.007102    6592 main.go:141] libmachine: Creating SSH key...
	I0913 12:20:52.105576    6592 main.go:141] libmachine: Creating Disk image...
	I0913 12:20:52.105586    6592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 12:20:52.105803    6592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2.raw /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:52.115106    6592 main.go:141] libmachine: STDOUT: 
	I0913 12:20:52.115130    6592 main.go:141] libmachine: STDERR: 
	I0913 12:20:52.115185    6592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2 +20000M
	I0913 12:20:52.123034    6592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 12:20:52.123053    6592 main.go:141] libmachine: STDERR: 
	I0913 12:20:52.123067    6592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:52.123072    6592 main.go:141] libmachine: Starting QEMU VM...
	I0913 12:20:52.123079    6592 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:52.123116    6592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:29:48:3c:af:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:52.124711    6592 main.go:141] libmachine: STDOUT: 
	I0913 12:20:52.124723    6592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:52.124736    6592 client.go:171] duration metric: took 288.792375ms to LocalClient.Create
	I0913 12:20:54.126867    6592 start.go:128] duration metric: took 2.343592417s to createHost
	I0913 12:20:54.126931    6592 start.go:83] releasing machines lock for "newest-cni-175000", held for 2.344124542s
	W0913 12:20:54.127347    6592 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-175000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-175000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:54.135928    6592 out.go:201] 
	W0913 12:20:54.146104    6592 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:54.146125    6592 out.go:270] * 
	* 
	W0913 12:20:54.148167    6592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:20:54.158051    6592 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-175000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000: exit status 7 (69.781584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-175000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.95s)
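Every start failure in this test reduces to the same host-side precondition: nothing is accepting connections on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits before qemu-system-aarch64 is ever launched. A minimal pre-flight probe for that socket, sketched in Go on the assumption that the daemon listens on the default path shown in the cluster config above:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Dial the unix socket that socket_vmnet_client connects to.
    	// "connection refused" here reproduces the failure in the log and
    	// means the socket_vmnet daemon is not running (or not listening there).
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

Run before the suite, such a probe would let these ten-second failures be flagged as an environment problem rather than a test regression.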

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-923000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (32.081666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
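The "context ... does not exist" failures in this block are downstream of the failed start above: the profile was never registered in the kubeconfig, so every kubectl call against it fails immediately. One way to confirm which contexts actually exist, sketched in Go (it shells out to the standard kubectl config get-contexts -o name; kubectl is assumed to be on PATH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List kubeconfig context names; a profile that never started
    	// (e.g. default-k8s-diff-port-923000 here) will be absent.
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
    		os.Exit(1)
    	}
    	for _, name := range strings.Fields(string(out)) {
    		fmt.Println(name)
    	}
    }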

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-923000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-923000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-923000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.506041ms)

** stderr ** 
	error: context "default-k8s-diff-port-923000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-923000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (29.206625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-923000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (28.544708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
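The (-want +got) diff above contains only removals: image list returned an empty set because the VM never booted, so all eight expected v1.31.1 images are reported missing. The same check can be reproduced outside the harness; a sketch in Go, assuming (this is an assumption, not a documented schema) that --format=json emits an array of objects carrying a repoTags field:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // imageEntry models one record of `image list --format=json`;
    // the repoTags field name is assumed here, not confirmed.
    type imageEntry struct {
    	RepoTags []string `json:"repoTags"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "-p",
    		"default-k8s-diff-port-923000", "image", "list", "--format=json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "image list failed:", err)
    		os.Exit(1)
    	}
    	var entries []imageEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		fmt.Fprintln(os.Stderr, "unexpected output shape:", err)
    		os.Exit(1)
    	}
    	for _, e := range entries {
    		for _, tag := range e.RepoTags {
    			fmt.Println(tag)
    		}
    	}
    }

An empty listing here, rather than a partial one, points at the stopped host and not at individual missing images.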

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-923000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-923000 --alsologtostderr -v=1: exit status 83 (40.061917ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-923000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-923000"

-- /stdout --
** stderr ** 
	I0913 12:20:47.851688    6617 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:47.851854    6617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:47.851857    6617 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:47.851860    6617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:47.851998    6617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:47.852213    6617 out.go:352] Setting JSON to false
	I0913 12:20:47.852221    6617 mustload.go:65] Loading cluster: default-k8s-diff-port-923000
	I0913 12:20:47.852461    6617 config.go:182] Loaded profile config "default-k8s-diff-port-923000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:47.856079    6617 out.go:177] * The control-plane node default-k8s-diff-port-923000 host is not running: state=Stopped
	I0913 12:20:47.859959    6617 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-923000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-923000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (29.201958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (29.251ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-923000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-175000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-175000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.187629708s)

-- stdout --
	* [newest-cni-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-175000" primary control-plane node in "newest-cni-175000" cluster
	* Restarting existing qemu2 VM for "newest-cni-175000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-175000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 12:20:57.973759    6667 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:20:57.973866    6667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:57.973870    6667 out.go:358] Setting ErrFile to fd 2...
	I0913 12:20:57.973872    6667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:20:57.974009    6667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:20:57.975067    6667 out.go:352] Setting JSON to false
	I0913 12:20:57.991146    6667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4820,"bootTime":1726250437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 12:20:57.991213    6667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 12:20:57.995688    6667 out.go:177] * [newest-cni-175000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 12:20:58.003711    6667 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 12:20:58.003754    6667 notify.go:220] Checking for updates...
	I0913 12:20:58.011626    6667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 12:20:58.014771    6667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 12:20:58.017661    6667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 12:20:58.020695    6667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 12:20:58.023620    6667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 12:20:58.026902    6667 config.go:182] Loaded profile config "newest-cni-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:20:58.027152    6667 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 12:20:58.030728    6667 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 12:20:58.037661    6667 start.go:297] selected driver: qemu2
	I0913 12:20:58.037666    6667 start.go:901] validating driver "qemu2" against &{Name:newest-cni-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:58.037710    6667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 12:20:58.040191    6667 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 12:20:58.040220    6667 cni.go:84] Creating CNI manager for ""
	I0913 12:20:58.040244    6667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 12:20:58.040294    6667 start.go:340] cluster config:
	{Name:newest-cni-175000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-175000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 12:20:58.044016    6667 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 12:20:58.051614    6667 out.go:177] * Starting "newest-cni-175000" primary control-plane node in "newest-cni-175000" cluster
	I0913 12:20:58.055711    6667 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 12:20:58.055727    6667 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 12:20:58.055740    6667 cache.go:56] Caching tarball of preloaded images
	I0913 12:20:58.055809    6667 preload.go:172] Found /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 12:20:58.055815    6667 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 12:20:58.055882    6667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/newest-cni-175000/config.json ...
	I0913 12:20:58.056383    6667 start.go:360] acquireMachinesLock for newest-cni-175000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:20:58.056412    6667 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "newest-cni-175000"
	I0913 12:20:58.056420    6667 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:20:58.056426    6667 fix.go:54] fixHost starting: 
	I0913 12:20:58.056544    6667 fix.go:112] recreateIfNeeded on newest-cni-175000: state=Stopped err=<nil>
	W0913 12:20:58.056552    6667 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:20:58.061695    6667 out.go:177] * Restarting existing qemu2 VM for "newest-cni-175000" ...
	I0913 12:20:58.069632    6667 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:20:58.069674    6667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:29:48:3c:af:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:20:58.071662    6667 main.go:141] libmachine: STDOUT: 
	I0913 12:20:58.071681    6667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:20:58.071711    6667 fix.go:56] duration metric: took 15.284458ms for fixHost
	I0913 12:20:58.071717    6667 start.go:83] releasing machines lock for "newest-cni-175000", held for 15.301ms
	W0913 12:20:58.071728    6667 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:20:58.071782    6667 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:20:58.071787    6667 start.go:729] Will try again in 5 seconds ...
	I0913 12:21:03.073770    6667 start.go:360] acquireMachinesLock for newest-cni-175000: {Name:mk45e0be455241ca06f56c58de88f38971c18da1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 12:21:03.074228    6667 start.go:364] duration metric: took 380.208µs to acquireMachinesLock for "newest-cni-175000"
	I0913 12:21:03.074356    6667 start.go:96] Skipping create...Using existing machine configuration
	I0913 12:21:03.074375    6667 fix.go:54] fixHost starting: 
	I0913 12:21:03.075119    6667 fix.go:112] recreateIfNeeded on newest-cni-175000: state=Stopped err=<nil>
	W0913 12:21:03.075146    6667 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 12:21:03.083710    6667 out.go:177] * Restarting existing qemu2 VM for "newest-cni-175000" ...
	I0913 12:21:03.086658    6667 qemu.go:418] Using hvf for hardware acceleration
	I0913 12:21:03.086881    6667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:29:48:3c:af:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19636-1170/.minikube/machines/newest-cni-175000/disk.qcow2
	I0913 12:21:03.096461    6667 main.go:141] libmachine: STDOUT: 
	I0913 12:21:03.096546    6667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 12:21:03.096673    6667 fix.go:56] duration metric: took 22.274917ms for fixHost
	I0913 12:21:03.096700    6667 start.go:83] releasing machines lock for "newest-cni-175000", held for 22.448375ms
	W0913 12:21:03.096898    6667 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-175000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-175000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 12:21:03.103815    6667 out.go:201] 
	W0913 12:21:03.107763    6667 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 12:21:03.107788    6667 out.go:270] * 
	* 
	W0913 12:21:03.110450    6667 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 12:21:03.120702    6667 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-175000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000: exit status 7 (67.138792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-175000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
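Unlike FirstStart, this run takes the existing-machine path (fix.go recreateIfNeeded on state=Stopped) rather than createHost, but both funnel into the same socket_vmnet_client exec, so the single fixed-delay retry in the log ("Will try again in 5 seconds") cannot succeed while the daemon stays down. The retry shape, reduced to a self-contained Go sketch (startHost is illustrative, not minikube's API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for minikube's host start; it always fails the
    // way the log does, to show why one fixed-delay retry cannot help.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds"
    		if err := startHost(); err != nil {
    			fmt.Println("Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }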

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-175000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000: exit status 7 (29.939625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-175000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-175000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-175000 --alsologtostderr -v=1: exit status 83 (40.799792ms)

-- stdout --
	* The control-plane node newest-cni-175000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-175000"

-- /stdout --
** stderr ** 
	I0913 12:21:03.304851    6681 out.go:345] Setting OutFile to fd 1 ...
	I0913 12:21:03.305002    6681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:21:03.305009    6681 out.go:358] Setting ErrFile to fd 2...
	I0913 12:21:03.305012    6681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 12:21:03.305137    6681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 12:21:03.305347    6681 out.go:352] Setting JSON to false
	I0913 12:21:03.305352    6681 mustload.go:65] Loading cluster: newest-cni-175000
	I0913 12:21:03.305555    6681 config.go:182] Loaded profile config "newest-cni-175000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 12:21:03.308687    6681 out.go:177] * The control-plane node newest-cni-175000 host is not running: state=Stopped
	I0913 12:21:03.312687    6681 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-175000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-175000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000: exit status 7 (30.334167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-175000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000: exit status 7 (30.387167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-175000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (155/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 12.58
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.1
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.11
29 TestAddons/serial/Volcano 38.25
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 19.11
35 TestAddons/parallel/InspektorGadget 10.23
36 TestAddons/parallel/MetricsServer 5.26
38 TestAddons/parallel/CSI 48.25
39 TestAddons/parallel/Headlamp 16.62
40 TestAddons/parallel/CloudSpanner 5.17
41 TestAddons/parallel/LocalPath 40.96
42 TestAddons/parallel/NvidiaDevicePlugin 6.19
43 TestAddons/parallel/Yakd 11.41
44 TestAddons/StoppedEnableDisable 12.39
52 TestHyperKitDriverInstallOrUpdate 10.62
55 TestErrorSpam/setup 34.25
56 TestErrorSpam/start 0.33
57 TestErrorSpam/status 0.26
58 TestErrorSpam/pause 0.74
59 TestErrorSpam/unpause 0.67
60 TestErrorSpam/stop 64.24
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 48
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.05
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.9
72 TestFunctional/serial/CacheCmd/cache/add_local 1.72
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
74 TestFunctional/serial/CacheCmd/cache/list 0.03
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.68
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 0.82
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
80 TestFunctional/serial/ExtraConfig 37.25
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.64
83 TestFunctional/serial/LogsFileCmd 0.64
84 TestFunctional/serial/InvalidService 4.16
86 TestFunctional/parallel/ConfigCmd 0.23
87 TestFunctional/parallel/DashboardCmd 10.2
88 TestFunctional/parallel/DryRun 0.23
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.25
95 TestFunctional/parallel/AddonsCmd 0.09
96 TestFunctional/parallel/PersistentVolumeClaim 24.54
98 TestFunctional/parallel/SSHCmd 0.14
99 TestFunctional/parallel/CpCmd 0.44
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.42
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.15
110 TestFunctional/parallel/License 0.33
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.18
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.8
118 TestFunctional/parallel/ImageCommands/Setup 1.72
119 TestFunctional/parallel/DockerEnv/bash 0.33
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
123 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
136 TestFunctional/parallel/ServiceCmd/List 0.13
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
139 TestFunctional/parallel/ServiceCmd/Format 0.1
140 TestFunctional/parallel/ServiceCmd/URL 0.1
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
148 TestFunctional/parallel/ProfileCmd/profile_list 0.12
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
150 TestFunctional/parallel/MountCmd/any-port 5.73
151 TestFunctional/parallel/MountCmd/specific-port 0.87
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
153 TestFunctional/delete_echo-server_images 0.07
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 178.22
160 TestMultiControlPlane/serial/DeployApp 4.52
161 TestMultiControlPlane/serial/PingHostFromPods 0.73
162 TestMultiControlPlane/serial/AddWorkerNode 51.68
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
165 TestMultiControlPlane/serial/CopyFile 4.28
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.08
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 3.23
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 0.87
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.38
276 TestNoKubernetes/serial/Stop 2.08
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
293 TestStartStop/group/old-k8s-version/serial/Stop 3.22
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
304 TestStartStop/group/no-preload/serial/Stop 2.66
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/embed-certs/serial/Stop 3.94
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.32
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
337 TestStartStop/group/newest-cni/serial/Stop 3.52
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-007000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-007000: exit status 85 (94.335917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-007000 | jenkins | v1.34.0 | 13 Sep 24 11:19 PDT |          |
	|         | -p download-only-007000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 11:19:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 11:19:51.756132    1697 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:19:51.756300    1697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:19:51.756303    1697 out.go:358] Setting ErrFile to fd 2...
	I0913 11:19:51.756306    1697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:19:51.756424    1697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	W0913 11:19:51.756507    1697 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19636-1170/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19636-1170/.minikube/config/config.json: no such file or directory
	I0913 11:19:51.757769    1697 out.go:352] Setting JSON to true
	I0913 11:19:51.774677    1697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1154,"bootTime":1726250437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:19:51.774743    1697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:19:51.780788    1697 out.go:97] [download-only-007000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:19:51.780961    1697 notify.go:220] Checking for updates...
	W0913 11:19:51.780984    1697 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 11:19:51.783616    1697 out.go:169] MINIKUBE_LOCATION=19636
	I0913 11:19:51.786756    1697 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:19:51.790776    1697 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:19:51.793646    1697 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:19:51.796651    1697 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	W0913 11:19:51.801183    1697 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 11:19:51.801352    1697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:19:51.805721    1697 out.go:97] Using the qemu2 driver based on user configuration
	I0913 11:19:51.805741    1697 start.go:297] selected driver: qemu2
	I0913 11:19:51.805755    1697 start.go:901] validating driver "qemu2" against <nil>
	I0913 11:19:51.805823    1697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 11:19:51.808657    1697 out.go:169] Automatically selected the socket_vmnet network
	I0913 11:19:51.814168    1697 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 11:19:51.814258    1697 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 11:19:51.814306    1697 cni.go:84] Creating CNI manager for ""
	I0913 11:19:51.814348    1697 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 11:19:51.814400    1697 start.go:340] cluster config:
	{Name:download-only-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:19:51.819465    1697 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 11:19:51.823702    1697 out.go:97] Downloading VM boot image ...
	I0913 11:19:51.823719    1697 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/iso/arm64/minikube-v1.34.0-1726156389-19616-arm64.iso
	I0913 11:20:07.894612    1697 out.go:97] Starting "download-only-007000" primary control-plane node in "download-only-007000" cluster
	I0913 11:20:07.894633    1697 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 11:20:07.951659    1697 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 11:20:07.951681    1697 cache.go:56] Caching tarball of preloaded images
	I0913 11:20:07.951833    1697 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 11:20:07.958013    1697 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 11:20:07.958024    1697 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:08.050954    1697 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 11:20:21.503120    1697 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:21.503283    1697 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:22.200836    1697 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 11:20:22.201083    1697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/download-only-007000/config.json ...
	I0913 11:20:22.201100    1697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/download-only-007000/config.json: {Name:mkd7dab0aea3bdd8331068015415d9340f95ea68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 11:20:22.201363    1697 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 11:20:22.201562    1697 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0913 11:20:23.196578    1697 out.go:193] 
	W0913 11:20:23.202729    1697 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19636-1170/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109359720 0x109359720 0x109359720 0x109359720 0x109359720 0x109359720 0x109359720] Decompressors:map[bz2:0x14000121230 gz:0x14000121238 tar:0x14000121190 tar.bz2:0x140001211d0 tar.gz:0x140001211e0 tar.xz:0x14000121210 tar.zst:0x14000121220 tbz2:0x140001211d0 tgz:0x140001211e0 txz:0x14000121210 tzst:0x14000121220 xz:0x14000121240 zip:0x14000121250 zst:0x14000121248] Getters:map[file:0x14000065b50 http:0x140007c01e0 https:0x140007c0230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0913 11:20:23.202754    1697 out_reason.go:110] 
	W0913 11:20:23.210735    1697 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 11:20:23.214607    1697 out.go:193] 
	
	
	* The control-plane node download-only-007000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-007000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
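
The 404 recorded in the log above is the actual root cause: dl.k8s.io publishes no darwin/arm64 kubectl (nor its .sha256 checksum file) for v1.20.0, since Apple-silicon client binaries were only introduced in later Kubernetes releases, so minikube cannot cache kubectl for this version. A minimal sketch of confirming that outside the test harness, using the exact URLs from the log (curl follows the dl.k8s.io redirect and prints the final status code):

    # expect 404 for v1.20.0 on darwin/arm64; a published os/arch/version combination returns 200
    curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl
    curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256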

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-007000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (12.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-606000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-606000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (12.58176525s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.58s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-606000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-606000: exit status 85 (79.020167ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-007000 | jenkins | v1.34.0 | 13 Sep 24 11:19 PDT |                     |
	|         | -p download-only-007000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| delete  | -p download-only-007000        | download-only-007000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT | 13 Sep 24 11:20 PDT |
	| start   | -o=json --download-only        | download-only-606000 | jenkins | v1.34.0 | 13 Sep 24 11:20 PDT |                     |
	|         | -p download-only-606000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 11:20:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 11:20:23.624898    1721 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:20:23.625034    1721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:20:23.625037    1721 out.go:358] Setting ErrFile to fd 2...
	I0913 11:20:23.625039    1721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:20:23.625169    1721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:20:23.626227    1721 out.go:352] Setting JSON to true
	I0913 11:20:23.642305    1721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1186,"bootTime":1726250437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:20:23.642366    1721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:20:23.646766    1721 out.go:97] [download-only-606000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:20:23.646867    1721 notify.go:220] Checking for updates...
	I0913 11:20:23.650741    1721 out.go:169] MINIKUBE_LOCATION=19636
	I0913 11:20:23.653818    1721 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:20:23.657790    1721 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:20:23.660756    1721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:20:23.663795    1721 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	W0913 11:20:23.669731    1721 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 11:20:23.669882    1721 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:20:23.673723    1721 out.go:97] Using the qemu2 driver based on user configuration
	I0913 11:20:23.673733    1721 start.go:297] selected driver: qemu2
	I0913 11:20:23.673737    1721 start.go:901] validating driver "qemu2" against <nil>
	I0913 11:20:23.673805    1721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 11:20:23.676751    1721 out.go:169] Automatically selected the socket_vmnet network
	I0913 11:20:23.681846    1721 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 11:20:23.681932    1721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 11:20:23.681950    1721 cni.go:84] Creating CNI manager for ""
	I0913 11:20:23.681971    1721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 11:20:23.681976    1721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 11:20:23.682009    1721 start.go:340] cluster config:
	{Name:download-only-606000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:20:23.685231    1721 iso.go:125] acquiring lock: {Name:mka2f435e3744f8609965fdd69c2f86dd20c7182 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 11:20:23.688774    1721 out.go:97] Starting "download-only-606000" primary control-plane node in "download-only-606000" cluster
	I0913 11:20:23.688782    1721 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 11:20:23.745637    1721 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 11:20:23.745654    1721 cache.go:56] Caching tarball of preloaded images
	I0913 11:20:23.745813    1721 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 11:20:23.750079    1721 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 11:20:23.750086    1721 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0913 11:20:23.868797    1721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19636-1170/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-606000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-606000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-606000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.31s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-525000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-525000
--- PASS: TestBinaryMirror (0.31s)
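
For context, --binary-mirror points minikube's kubectl/kubelet/kubeadm downloads at a caller-supplied base URL instead of dl.k8s.io; the test above only verifies the flag is honored in --download-only mode against the harness's own server. A rough sketch of standing up such a mirror by hand (the port, profile name, and directory layout here are illustrative assumptions, with the layout inferred from the release URLs elsewhere in this log):

    # assumed layout: <mirror root>/<version>/bin/<os>/<arch>/<binary>
    mkdir -p mirror/v1.31.1/bin/linux/arm64
    (cd mirror && python3 -m http.server 49313) &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:49313 --driver=qemu2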

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-166000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-166000: exit status 85 (56.169542ms)

                                                
                                                
-- stdout --
	* Profile "addons-166000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-166000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-166000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-166000: exit status 85 (59.980666ms)

                                                
                                                
-- stdout --
	* Profile "addons-166000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-166000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (200.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-166000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-166000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m20.113621125s)
--- PASS: TestAddons/Setup (200.11s)

                                                
                                    
TestAddons/serial/Volcano (38.25s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 7.966666ms
addons_test.go:843: volcano-admission stabilized in 8.005875ms
addons_test.go:851: volcano-controller stabilized in 8.0175ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-v68cc" [6a4bf0d6-2430-445f-a89f-500ef43a6f90] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004759625s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-vbmdg" [d70255d0-670b-4551-9bb3-4362ebc14210] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005430792s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-nztbl" [05388e77-809c-499b-9609-b9c00a5dd1f2] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00412175s
addons_test.go:870: (dbg) Run:  kubectl --context addons-166000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-166000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-166000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5c5e38ca-6b93-4c5b-839e-7d1e5cfb6896] Pending
helpers_test.go:344: "test-job-nginx-0" [5c5e38ca-6b93-4c5b-839e-7d1e5cfb6896] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5c5e38ca-6b93-4c5b-839e-7d1e5cfb6896] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004436s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-166000 addons disable volcano --alsologtostderr -v=1: (10.01349575s)
--- PASS: TestAddons/serial/Volcano (38.25s)
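
The fixture exercised above creates a single Volcano Job whose pods carry the volcano.sh/job-name=test-job label the test waits on. A minimal sketch of an equivalent manifest, applied the same way the test drives kubectl (field values are assumptions; the repo's testdata/vcjob.yaml is authoritative):

    # assumes the my-volcano namespace already exists, as in the test flow
    kubectl --context addons-166000 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      minAvailable: 1
      schedulerName: volcano
      tasks:
      - name: nginx
        replicas: 1
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: nginx
              image: nginx:latest
    EOF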

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-166000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-166000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

                                                
                                    
TestAddons/parallel/Ingress (19.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-166000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-166000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-166000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3e205888-6dc5-4ce8-8474-a7b370c279ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3e205888-6dc5-4ce8-8474-a7b370c279ea] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007112833s
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-166000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-166000 addons disable ingress --alsologtostderr -v=1: (7.212273875s)
--- PASS: TestAddons/parallel/Ingress (19.11s)
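
The fixture behind the curl above routes requests with Host: nginx.example.com to the nginx pod's service inside the cluster. A minimal sketch of such an Ingress (the service name and ingress class are assumptions; testdata/nginx-ingress-v1.yaml is authoritative):

    kubectl --context addons-166000 apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: nginx.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
    EOF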

                                                
                                    
TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bxqh8" [bbcad02b-38bd-4248-afae-81c710361048] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00631225s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-166000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-166000: (5.226757875s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.160833ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-pzfzj" [3fb84404-de93-448a-b27c-5ae8d61b4079] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005433083s
addons_test.go:413: (dbg) Run:  kubectl --context addons-166000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

                                                
                                    
TestAddons/parallel/CSI (48.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.645709ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-166000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-166000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1f0cb9c1-11b5-4fe2-8f06-f6278f5fd1cf] Pending
helpers_test.go:344: "task-pv-pod" [1f0cb9c1-11b5-4fe2-8f06-f6278f5fd1cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1f0cb9c1-11b5-4fe2-8f06-f6278f5fd1cf] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.009329s
addons_test.go:528: (dbg) Run:  kubectl --context addons-166000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-166000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-166000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-166000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-166000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-166000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-166000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e41b1909-09e9-43b0-acd8-2e68caa0a2f8] Pending
helpers_test.go:344: "task-pv-pod-restore" [e41b1909-09e9-43b0-acd8-2e68caa0a2f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e41b1909-09e9-43b0-acd8-2e68caa0a2f8] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008287417s
addons_test.go:570: (dbg) Run:  kubectl --context addons-166000 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-166000 delete pod task-pv-pod-restore: (1.042918292s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-166000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-166000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-166000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.15223475s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.25s)
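
The snapshot/restore leg of the flow above reduces to two objects: a VolumeSnapshot taken from the hpvc claim, and a second claim restored from it via dataSource. A minimal sketch (the class names are assumptions about the csi-hostpath-driver addon's defaults; the repo's testdata/csi-hostpath-driver files are authoritative):

    kubectl --context addons-166000 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc   # assumed class name
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
    EOF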

                                                
                                    
TestAddons/parallel/Headlamp (16.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-166000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-2k8bt" [9caa8f0d-7a7b-4878-b72d-21658315022f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-2k8bt" [9caa8f0d-7a7b-4878-b72d-21658315022f] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.01101575s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-166000 addons disable headlamp --alsologtostderr -v=1: (5.277741542s)
--- PASS: TestAddons/parallel/Headlamp (16.62s)

TestAddons/parallel/CloudSpanner (5.17s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-wz7nx" [1afd50be-f492-4231-a831-fcac0a4d88c8] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004156917s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-166000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (40.96s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-166000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-166000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [21734066-e2da-4c6b-8d44-bf36b51256e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [21734066-e2da-4c6b-8d44-bf36b51256e5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [21734066-e2da-4c6b-8d44-bf36b51256e5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003494708s
addons_test.go:938: (dbg) Run:  kubectl --context addons-166000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 ssh "cat /opt/local-path-provisioner/pvc-71858af5-0dcd-4beb-8a2f-1b15243c2fcb_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-166000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-166000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-arm64 -p addons-166000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.47933s)
--- PASS: TestAddons/parallel/LocalPath (40.96s)

TestAddons/parallel/NvidiaDevicePlugin (6.19s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jfh67" [363aebd6-7e4c-4855-b561-334e37188d45] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010801667s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-166000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (11.41s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pmshm" [fbb56a98-ec16-4bd2-bbca-05af9333c3d0] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.014140334s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-166000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-166000 addons disable yakd --alsologtostderr -v=1: (5.390507292s)
--- PASS: TestAddons/parallel/Yakd (11.41s)

TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-166000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-166000: (12.204944s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-166000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-166000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-166000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.62s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.62s)

TestErrorSpam/setup (34.25s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-057000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-057000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 --driver=qemu2 : (34.249147333s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.25s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.26s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.74s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 pause
--- PASS: TestErrorSpam/pause (0.74s)

TestErrorSpam/unpause (0.67s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 unpause
--- PASS: TestErrorSpam/unpause (0.67s)

TestErrorSpam/stop (64.24s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 stop: (12.167094042s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 stop: (26.03213325s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-057000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-057000 stop: (26.036675792s)
--- PASS: TestErrorSpam/stop (64.24s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19636-1170/.minikube/files/etc/test/nested/copy/1695/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-033000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-033000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.997527167s)
--- PASS: TestFunctional/serial/StartWithProxy (48.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-033000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-033000 --alsologtostderr -v=8: (38.003515417s)
functional_test.go:663: soft start took 38.003936042s for "functional-033000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.00s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-033000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.9s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-033000 cache add registry.k8s.io/pause:3.1: (1.104236625s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-033000 cache add registry.k8s.io/pause:3.3: (1.058432958s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.90s)

TestFunctional/serial/CacheCmd/cache/add_local (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2864258027/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cache add minikube-local-cache-test:functional-033000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-033000 cache add minikube-local-cache-test:functional-033000: (1.395532375s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cache delete minikube-local-cache-test:functional-033000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-033000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.528375ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)
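The block above is the whole cache_reload flow: delete the image inside the node, confirm that crictl inspecti now fails, run cache reload, and confirm the image is present again. A sketch of the same round trip driven from Go (the mk helper is illustrative; the binary and profile names mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary against the functional-033000 profile and
// returns the command's error so callers can check for a non-zero exit.
func mk(args ...string) error {
	cmd := exec.Command("out/minikube-darwin-arm64", append([]string{"-p", "functional-033000"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image absent, as expected:", err) // exit status 1 in the log above
	}
	mk("cache", "reload") // pushes cached images back into the node
	fmt.Println("after reload:", mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
}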
TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.82s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 kubectl -- --context functional-033000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.82s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-033000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-033000 get pods: (1.017875834s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (37.25s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-033000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-033000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.248582708s)
functional_test.go:761: restart took 37.248719208s for "functional-033000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.25s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-033000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4196132446/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (4.16s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-033000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-033000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-033000: exit status 115 (151.251583ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30679 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-033000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)

TestFunctional/parallel/ConfigCmd (0.23s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 config get cpus: exit status 14 (30.840875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 config get cpus: exit status 14 (29.012459ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
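Note the assertion style here: config get on an unset key does not print an empty value, it exits with status 14 and explains itself on stderr. A sketch of reading that exit code from Go (illustrative, not minikube's test code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-033000", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Expect 14 when the key is unset; the message arrives on stderr.
		fmt.Printf("exit %d: %s", ee.ExitCode(), out)
	} else if err == nil {
		fmt.Printf("cpus is set to: %s", out)
	}
}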
TestFunctional/parallel/DashboardCmd (10.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-033000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-033000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3023: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.20s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-033000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-033000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.619292ms)
-- stdout --
	* [functional-033000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0913 11:39:38.601255    3006 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:39:38.601381    3006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:39:38.601384    3006 out.go:358] Setting ErrFile to fd 2...
	I0913 11:39:38.601386    3006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:39:38.601525    3006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:39:38.602515    3006 out.go:352] Setting JSON to false
	I0913 11:39:38.619620    3006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2342,"bootTime":1726250436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:39:38.619694    3006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:39:38.624167    3006 out.go:177] * [functional-033000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 11:39:38.634150    3006 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 11:39:38.634203    3006 notify.go:220] Checking for updates...
	I0913 11:39:38.642106    3006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:39:38.645974    3006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:39:38.649116    3006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:39:38.652104    3006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 11:39:38.655154    3006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 11:39:38.658394    3006 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:39:38.658646    3006 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:39:38.662130    3006 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 11:39:38.669133    3006 start.go:297] selected driver: qemu2
	I0913 11:39:38.669138    3006 start.go:901] validating driver "qemu2" against &{Name:functional-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:39:38.669182    3006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 11:39:38.675079    3006 out.go:201] 
	W0913 11:39:38.679103    3006 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 11:39:38.683094    3006 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-033000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
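The dry run exercises minikube's up-front resource validation: the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work starts, because it is below the usable minimum of 1800MB quoted in the message. A toy version of that check (the constant and function are illustrative, not minikube's source):

package main

import "fmt"

const minUsableMB = 1800 // the minimum quoted in the error above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the dry run above
	fmt.Println(validateMemory(4000)) // accepted
}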
TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-033000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-033000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.914584ms)
-- stdout --
	* [functional-033000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0913 11:39:38.826522    3017 out.go:345] Setting OutFile to fd 1 ...
	I0913 11:39:38.826622    3017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:39:38.826625    3017 out.go:358] Setting ErrFile to fd 2...
	I0913 11:39:38.826628    3017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 11:39:38.826761    3017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
	I0913 11:39:38.828143    3017 out.go:352] Setting JSON to false
	I0913 11:39:38.845287    3017 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2342,"bootTime":1726250436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 11:39:38.845390    3017 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 11:39:38.849153    3017 out.go:177] * [functional-033000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0913 11:39:38.856187    3017 out.go:177]   - MINIKUBE_LOCATION=19636
	I0913 11:39:38.856284    3017 notify.go:220] Checking for updates...
	I0913 11:39:38.863168    3017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	I0913 11:39:38.866118    3017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 11:39:38.869125    3017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 11:39:38.870545    3017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	I0913 11:39:38.874119    3017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 11:39:38.877434    3017 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 11:39:38.877686    3017 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 11:39:38.881908    3017 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0913 11:39:38.889160    3017 start.go:297] selected driver: qemu2
	I0913 11:39:38.889169    3017 start.go:901] validating driver "qemu2" against &{Name:functional-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19616/minikube-v1.34.0-1726156389-19616-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726193793-19634@sha256:4434bf9c4c4590e602ea482d2337d9d858a3db898bec2a85c17f78c81593c44e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 11:39:38.889224    3017 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 11:39:38.895005    3017 out.go:201] 
	W0913 11:39:38.899077    3017 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 11:39:38.903107    3017 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.25s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (24.54s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9b4e07d6-cecf-466d-8c62-5ace153ce6f2] Running
E0913 11:39:07.289671    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008389416s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-033000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-033000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-033000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-033000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8636546a-dab6-4f44-b502-632c58f6130c] Pending
helpers_test.go:344: "sp-pod" [8636546a-dab6-4f44-b502-632c58f6130c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8636546a-dab6-4f44-b502-632c58f6130c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.01209975s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-033000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-033000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-033000 delete -f testdata/storage-provisioner/pod.yaml: (1.014945833s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-033000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [795f1ce5-a74a-439a-966e-135b482219b0] Pending
helpers_test.go:344: "sp-pod" [795f1ce5-a74a-439a-966e-135b482219b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [795f1ce5-a74a-439a-966e-135b482219b0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005882125s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-033000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.54s)
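The core assertion above is persistence: /tmp/mount/foo, written before the pod was deleted, is still listed after a brand-new sp-pod mounts the same claim, because the PVC and its volume outlive any one pod. The same round trip as a sketch (the wait for the recreated pod to reach Running is elided):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the functional-033000 context, echoing output.
func kc(args ...string) error {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-033000"}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...poll until the recreated sp-pod is Running, then:
	kc("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo" to still be listed
}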
TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.44s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh -n functional-033000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cp functional-033000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2709143395/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh -n functional-033000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh -n functional-033000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1695/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /etc/test/nested/copy/1695/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.42s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1695.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /etc/ssl/certs/1695.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1695.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /usr/share/ca-certificates/1695.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /etc/ssl/certs/16952.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /usr/share/ca-certificates/16952.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)
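The hash-named paths checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming convention for trust directories: the file name is the certificate's subject hash with a .0 suffix. A sketch that derives the expected name for a PEM file by shelling out to openssl (the input path is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// openssl prints the subject hash, e.g. 51391683, on a line by itself.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", "1695.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}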

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-033000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "sudo systemctl is-active crio": exit status 1 (153.576042ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)
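
The non-zero exit is the point of this test: with Docker as the active runtime, systemctl prints "inactive" for crio and exits with status 3 (systemd's code for an inactive unit), which surfaces through ssh as the failure above. The manual equivalent:

  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo systemctl is-active docker"   # active, exit 0
  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo systemctl is-active crio"     # inactive, exit 3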

TestFunctional/parallel/License (0.33s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-033000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-033000
docker.io/kicbase/echo-server:functional-033000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-033000 image ls --format short --alsologtostderr:
I0913 11:39:42.402389    3045 out.go:345] Setting OutFile to fd 1 ...
I0913 11:39:42.402561    3045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:42.402564    3045 out.go:358] Setting ErrFile to fd 2...
I0913 11:39:42.402567    3045 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:42.402708    3045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 11:39:42.403116    3045 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:42.403177    3045 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:42.403965    3045 ssh_runner.go:195] Run: systemctl --version
I0913 11:39:42.403974    3045 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/functional-033000/id_rsa Username:docker}
I0913 11:39:42.437493    3045 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-033000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| localhost/my-image                          | functional-033000 | 099b38fd2c151 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-033000 | 3476a7297a20b | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/kicbase/echo-server               | functional-033000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-033000 image ls --format table --alsologtostderr:
I0913 11:39:44.442930    3057 out.go:345] Setting OutFile to fd 1 ...
I0913 11:39:44.443080    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:44.443083    3057 out.go:358] Setting ErrFile to fd 2...
I0913 11:39:44.443085    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:44.443212    3057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 11:39:44.443745    3057 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:44.443807    3057 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:44.444698    3057 ssh_runner.go:195] Run: systemctl --version
I0913 11:39:44.444708    3057 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/functional-033000/id_rsa Username:docker}
I0913 11:39:44.475221    3057 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/13 11:39:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-033000 image ls --format json --alsologtostderr:
[{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"3476a7297a20bc733b79ba125dfc7c22a4bcfcd071a9a11c8bfe93b78c726027","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-033000"],"size":"30"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"099b38fd2c151050afce3a81464a028828738afa7fc3f4051e60f8e85d515bff","repoDigests":[],"repoTags":["localhost/my-image:functional-033000"],"size":"1410000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-033000"],"size":"4780000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-033000 image ls --format json --alsologtostderr:
I0913 11:39:44.365042    3055 out.go:345] Setting OutFile to fd 1 ...
I0913 11:39:44.365197    3055 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:44.365202    3055 out.go:358] Setting ErrFile to fd 2...
I0913 11:39:44.365205    3055 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:44.365340    3055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 11:39:44.365788    3055 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:44.365848    3055 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:44.366741    3055 ssh_runner.go:195] Run: systemctl --version
I0913 11:39:44.366753    3055 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/functional-033000/id_rsa Username:docker}
I0913 11:39:44.395740    3055 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-033000 image ls --format yaml --alsologtostderr:
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-033000
size: "4780000"
- id: 3476a7297a20bc733b79ba125dfc7c22a4bcfcd071a9a11c8bfe93b78c726027
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-033000
size: "30"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-033000 image ls --format yaml --alsologtostderr:
I0913 11:39:42.486815    3047 out.go:345] Setting OutFile to fd 1 ...
I0913 11:39:42.486954    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:42.486957    3047 out.go:358] Setting ErrFile to fd 2...
I0913 11:39:42.486959    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:42.487080    3047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 11:39:42.487536    3047 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:42.487601    3047 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:42.488441    3047 ssh_runner.go:195] Run: systemctl --version
I0913 11:39:42.488450    3047 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/functional-033000/id_rsa Username:docker}
I0913 11:39:42.522500    3047 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh pgrep buildkitd: exit status 1 (62.951ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image build -t localhost/my-image:functional-033000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-033000 image build -t localhost/my-image:functional-033000 testdata/build --alsologtostderr: (1.651085s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-033000 image build -t localhost/my-image:functional-033000 testdata/build --alsologtostderr:
I0913 11:39:42.632198    3051 out.go:345] Setting OutFile to fd 1 ...
I0913 11:39:42.632458    3051 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:42.632461    3051 out.go:358] Setting ErrFile to fd 2...
I0913 11:39:42.632463    3051 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 11:39:42.632600    3051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19636-1170/.minikube/bin
I0913 11:39:42.633093    3051 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:42.633787    3051 config.go:182] Loaded profile config "functional-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 11:39:42.634669    3051 ssh_runner.go:195] Run: systemctl --version
I0913 11:39:42.634681    3051 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19636-1170/.minikube/machines/functional-033000/id_rsa Username:docker}
I0913 11:39:42.663810    3051 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.401137307.tar
I0913 11:39:42.663902    3051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 11:39:42.668541    3051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.401137307.tar
I0913 11:39:42.670313    3051 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.401137307.tar: stat -c "%s %y" /var/lib/minikube/build/build.401137307.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.401137307.tar': No such file or directory
I0913 11:39:42.670328    3051 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.401137307.tar --> /var/lib/minikube/build/build.401137307.tar (3072 bytes)
I0913 11:39:42.680343    3051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.401137307
I0913 11:39:42.683718    3051 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.401137307 -xf /var/lib/minikube/build/build.401137307.tar
I0913 11:39:42.687248    3051 docker.go:360] Building image: /var/lib/minikube/build/build.401137307
I0913 11:39:42.687302    3051 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-033000 /var/lib/minikube/build/build.401137307
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:099b38fd2c151050afce3a81464a028828738afa7fc3f4051e60f8e85d515bff done
#8 naming to localhost/my-image:functional-033000 done
#8 DONE 0.0s
I0913 11:39:44.239308    3051 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-033000 /var/lib/minikube/build/build.401137307: (1.552073083s)
I0913 11:39:44.239398    3051 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.401137307
I0913 11:39:44.243195    3051 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.401137307.tar
I0913 11:39:44.246318    3051 build_images.go:217] Built localhost/my-image:functional-033000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.401137307.tar
I0913 11:39:44.246336    3051 build_images.go:133] succeeded building to: functional-033000
I0913 11:39:44.246339    3051 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.80s)
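
As the build_images.go lines show, image build tars the local context, copies it to /var/lib/minikube/build inside the node, and builds there; the initial pgrep only detects whether a standalone buildkitd is available, and here the build goes through the node's Docker daemon instead. The same flow by hand, using the test's context directory:

  out/minikube-darwin-arm64 -p functional-033000 image build -t localhost/my-image:functional-033000 testdata/build
  out/minikube-darwin-arm64 -p functional-033000 image ls | grep my-image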

TestFunctional/parallel/ImageCommands/Setup (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7021625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-033000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/DockerEnv/bash (0.33s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-033000 docker-env) && out/minikube-darwin-arm64 status -p functional-033000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-033000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.33s)
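
docker-env prints shell exports (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH) pointing at the Docker daemon inside the profile's VM, so after the eval the host's docker CLI operates on the cluster's images:

  eval $(out/minikube-darwin-arm64 -p functional-033000 docker-env)
  docker images   # now lists the images inside the functional-033000 node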

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-033000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-033000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-h4tdp" [5c18d1ac-804d-47f3-b5f9-b484b232ed75] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-h4tdp" [5c18d1ac-804d-47f3-b5f9-b484b232ed75] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0913 11:38:59.602674    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:39:02.166115    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.008529416s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
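
The two kubectl calls are the standard deploy-then-expose pattern; the NodePort service created here is what the later ServiceCmd sub-tests resolve (https://192.168.105.4:30860 below). To inspect the assigned port directly:

  kubectl --context functional-033000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-033000 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-033000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'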

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image load --daemon kicbase/echo-server:functional-033000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image load --daemon kicbase/echo-server:functional-033000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-033000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image load --daemon kicbase/echo-server:functional-033000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image save kicbase/echo-server:functional-033000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image rm kicbase/echo-server:functional-033000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-033000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 image save --daemon kicbase/echo-server:functional-033000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-033000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
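
Taken together, the image sub-tests above form a full round trip: save the tagged image to a tar, remove it from the node, load it back from the tar, then push it into the host's Docker daemon with save --daemon. Condensed (the tar path is arbitrary; the test uses /Users/jenkins/workspace/echo-server-save.tar):

  out/minikube-darwin-arm64 -p functional-033000 image save kicbase/echo-server:functional-033000 /tmp/echo.tar
  out/minikube-darwin-arm64 -p functional-033000 image rm kicbase/echo-server:functional-033000
  out/minikube-darwin-arm64 -p functional-033000 image load /tmp/echo.tar
  out/minikube-darwin-arm64 -p functional-033000 image save --daemon kicbase/echo-server:functional-033000
  docker image inspect kicbase/echo-server:functional-033000   # confirms it landed in the host daemon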

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr]
E0913 11:38:57.019771    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:38:57.027801    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:38:57.041250    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:38:57.062693    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr] ...
E0913 11:38:57.106121    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:508: unable to kill pid 2858: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
E0913 11:38:57.188839    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-033000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8c687f44-5b3c-4360-85c6-ba0cfeec3065] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0913 11:38:57.352251    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:38:57.675642    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:38:58.319038    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [8c687f44-5b3c-4360-85c6-ba0cfeec3065] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.010172958s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

TestFunctional/parallel/ServiceCmd/List (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 service list -o json
functional_test.go:1494: Took "85.444ms" to run "out/minikube-darwin-arm64 -p functional-033000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30860
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30860
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-033000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.109.215 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
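
The tunnel serial group as a whole checks that minikube tunnel gives LoadBalancer services a host-reachable ingress IP plus working cluster DNS from the host; condensed:

  out/minikube-darwin-arm64 -p functional-033000 tunnel --alsologtostderr &
  kubectl --context functional-033000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A   # 10.96.0.10 is the default kube-dns ClusterIP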

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "87.77175ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.64225ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "88.030333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.401458ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
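
The timing pairs above (~88ms vs ~33ms) reflect what --light does: it skips probing each profile's cluster status and only reads the profile configs, trading completeness for speed:

  out/minikube-darwin-arm64 profile list -o json
  out/minikube-darwin-arm64 profile list -o json --light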

TestFunctional/parallel/MountCmd/any-port (5.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3959825655/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726252770019336000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3959825655/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726252770019336000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3959825655/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726252770019336000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3959825655/001/test-1726252770019336000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.853459ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (71.934167ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 18:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 18:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 18:39 test-1726252770019336000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh cat /mount-9p/test-1726252770019336000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-033000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1a208572-4875-4b1d-a54f-0f065a65fb08] Pending
helpers_test.go:344: "busybox-mount" [1a208572-4875-4b1d-a54f-0f065a65fb08] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1a208572-4875-4b1d-a54f-0f065a65fb08] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1a208572-4875-4b1d-a54f-0f065a65fb08] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008183584s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-033000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3959825655/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.73s)

TestFunctional/parallel/MountCmd/specific-port (0.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2906958340/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.197709ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2906958340/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "sudo umount -f /mount-9p": exit status 1 (67.654125ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-033000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2906958340/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1: exit status 1 (73.289334ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1: exit status 1 (101.74275ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1: exit status 1 (103.003958ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0913 11:39:38.014591    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-033000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-033000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-033000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup75650668/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-033000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-033000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-033000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (178.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-988000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0913 11:40:18.975924    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
E0913 11:41:40.895073    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/addons-166000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-988000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m58.026023166s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.22s)

TestMultiControlPlane/serial/DeployApp (4.52s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-988000 -- rollout status deployment/busybox: (2.767396375s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-h6frq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-qw5r6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-tlkls -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-h6frq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-qw5r6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-tlkls -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-h6frq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-qw5r6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-tlkls -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.52s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-h6frq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-h6frq -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-qw5r6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-qw5r6 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-tlkls -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-988000 -- exec busybox-7dff88458-tlkls -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (51.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-988000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-988000 -v=7 --alsologtostderr: (51.456197541s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.68s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-988000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.28s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp testdata/cp-test.txt ha-988000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile133649973/001/cp-test_ha-988000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000:/home/docker/cp-test.txt ha-988000-m02:/home/docker/cp-test_ha-988000_ha-988000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test_ha-988000_ha-988000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000:/home/docker/cp-test.txt ha-988000-m03:/home/docker/cp-test_ha-988000_ha-988000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test_ha-988000_ha-988000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000:/home/docker/cp-test.txt ha-988000-m04:/home/docker/cp-test_ha-988000_ha-988000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test_ha-988000_ha-988000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp testdata/cp-test.txt ha-988000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile133649973/001/cp-test_ha-988000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m02:/home/docker/cp-test.txt ha-988000:/home/docker/cp-test_ha-988000-m02_ha-988000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test_ha-988000-m02_ha-988000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m02:/home/docker/cp-test.txt ha-988000-m03:/home/docker/cp-test_ha-988000-m02_ha-988000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test_ha-988000-m02_ha-988000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m02:/home/docker/cp-test.txt ha-988000-m04:/home/docker/cp-test_ha-988000-m02_ha-988000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test_ha-988000-m02_ha-988000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp testdata/cp-test.txt ha-988000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile133649973/001/cp-test_ha-988000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m03:/home/docker/cp-test.txt ha-988000:/home/docker/cp-test_ha-988000-m03_ha-988000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test_ha-988000-m03_ha-988000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m03:/home/docker/cp-test.txt ha-988000-m02:/home/docker/cp-test_ha-988000-m03_ha-988000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test_ha-988000-m03_ha-988000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m03:/home/docker/cp-test.txt ha-988000-m04:/home/docker/cp-test_ha-988000-m03_ha-988000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test_ha-988000-m03_ha-988000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp testdata/cp-test.txt ha-988000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile133649973/001/cp-test_ha-988000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m04:/home/docker/cp-test.txt ha-988000:/home/docker/cp-test_ha-988000-m04_ha-988000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000 "sudo cat /home/docker/cp-test_ha-988000-m04_ha-988000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m04:/home/docker/cp-test.txt ha-988000-m02:/home/docker/cp-test_ha-988000-m04_ha-988000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m02 "sudo cat /home/docker/cp-test_ha-988000-m04_ha-988000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 cp ha-988000-m04:/home/docker/cp-test.txt ha-988000-m03:/home/docker/cp-test_ha-988000-m04_ha-988000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-988000 ssh -n ha-988000-m03 "sudo cat /home/docker/cp-test_ha-988000-m04_ha-988000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0913 11:53:53.346063    1695 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19636-1170/.minikube/profiles/functional-033000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.08038975s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.23s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-613000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-613000 --output=json --user=testUser: (3.231693333s)
--- PASS: TestJSONOutput/stop/Command (3.23s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-985000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-985000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.991125ms)
-- stdout --
	{"specversion":"1.0","id":"4bdea2f6-fe87-4827-9c24-e9e09146219b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-985000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa415c8a-266c-4ec6-a0e0-b35eeae3dd76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19636"}}
	{"specversion":"1.0","id":"6f32baf5-e9e3-4778-b63a-aaecb4eb3acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig"}}
	{"specversion":"1.0","id":"019c7d6c-ac96-492e-b4ca-9b40d6769c3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"554d8c16-6040-424b-a5cd-eb0b41eaaafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35f4e24b-6b05-4775-af22-f8685f58fd25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube"}}
	{"specversion":"1.0","id":"b20ef5a5-41b8-4515-98a7-26380cb2bbec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4a29bd6a-84a1-436b-914a-be05bdeec5f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-985000
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.87s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.87s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-114000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.59225ms)
-- stdout --
	* [NoKubernetes-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19636
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19636-1170/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19636-1170/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-114000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-114000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.543708ms)
-- stdout --
	* The control-plane node NoKubernetes-114000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-114000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.38s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.725867917s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.651350125s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.38s)

TestNoKubernetes/serial/Stop (2.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-114000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-114000: (2.075908958s)
--- PASS: TestNoKubernetes/serial/Stop (2.08s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-114000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-114000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.543125ms)
-- stdout --
	* The control-plane node NoKubernetes-114000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-114000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-748000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestStartStop/group/old-k8s-version/serial/Stop (3.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-556000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-556000 --alsologtostderr -v=3: (3.215108875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-556000 -n old-k8s-version-556000: exit status 7 (35.321375ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-556000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/no-preload/serial/Stop (2.66s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-560000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-560000 --alsologtostderr -v=3: (2.66055175s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.66s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-560000 -n no-preload-560000: exit status 7 (54.293958ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-560000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-085000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-085000 --alsologtostderr -v=3: (3.93610725s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-085000 -n embed-certs-085000: exit status 7 (55.321125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-085000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-923000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-923000 --alsologtostderr -v=3: (3.31653925s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-923000 -n default-k8s-diff-port-923000: exit status 7 (59.34975ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-923000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-175000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-175000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-175000 --alsologtostderr -v=3: (3.520606s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-175000 -n newest-cni-175000: exit status 7 (56.059458ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-175000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.33s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-151000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-151000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-151000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/hosts:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/resolv.conf:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-151000

>>> host: crictl pods:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: crictl containers:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> k8s: describe netcat deployment:
error: context "cilium-151000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-151000" does not exist

>>> k8s: netcat logs:
error: context "cilium-151000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-151000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-151000" does not exist

>>> k8s: coredns logs:
error: context "cilium-151000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-151000" does not exist

>>> k8s: api server logs:
error: context "cilium-151000" does not exist

>>> host: /etc/cni:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: ip a s:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: ip r s:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: iptables-save:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: iptables table nat:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-151000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-151000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-151000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-151000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-151000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-151000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-151000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-151000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-151000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-151000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-151000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: kubelet daemon config:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> k8s: kubelet logs:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-151000

>>> host: docker daemon status:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: docker daemon config:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: docker system info:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: cri-docker daemon status:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: cri-docker daemon config:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: cri-dockerd version:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: containerd daemon status:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: containerd daemon config:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: containerd config dump:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: crio daemon status:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: crio daemon config:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: /etc/crio:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"

>>> host: crio config:
* Profile "cilium-151000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-151000"
----------------------- debugLogs end: cilium-151000 [took: 2.224608s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-151000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-151000
--- SKIP: TestNetworkPlugins/group/cilium (2.33s)
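Each ">>>" probe in the debugLogs block above shells out to one diagnostic command against the profile and records whatever it prints, which is why every probe fails once the "cilium-151000" context is gone. A minimal sketch of that pattern (the wrapper is hypothetical, not minikube's source; kubectl and its --context flag are real):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "cilium-151000"
        // With no kubeconfig context for the deleted profile, kubectl
        // fails with one of the context-not-found errors captured above.
        out, _ := exec.Command("kubectl", "--context", profile,
            "get", "nodes").CombinedOutput()
        fmt.Printf(">>> k8s: nodes:\n%s\n", out)
    }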
TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-861000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-861000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)